test

Original post · Linux Operating System · Author: blue_or_white · Posted: 2011-05-12 22:42:57

In this chapter, we will review the technology decisions to be made in terms of the hardware required for a RAC cluster. The question of hardware is often neglected by DBAs; however, for a well-tuned application, the potential performance improvements offered by the latest server, interconnect, and storage architectures running current versions of Linux are greater than those achievable by upgrades or software tuning on previous generations of technology. For this reason, hardware is a crucial topic of consideration for designing the best RAC configurations; this is particularly true for understanding how all of the components operate in conjunction for optimal cluster performance.

Knowing your hardware is also essential for assessing the total cost of ownership of a clustered solution. A proper assessment considers not just the costs of hardware alone, but also the significant potential savings offered in software acquisition and maintenance by selecting and sizing the components correctly for your requirements. In a RAC environment, this knowledge is even more important because, when capacity requirements increase, you are often presented with a decision between adding another node to an existing cluster and replacing the cluster in its entirety with updated hardware. You also have the choice between a great number of small nodes and a small number of large nodes in the cluster. Related to how you make this choice is how the network communication between the nodes in the cluster is implemented, so this chapter will also discuss the implementation of input/output (I/O) on the server itself in the context of networking. Knowing the hardware building blocks of your cluster is fundamental to making the correct decisions; it can also help you understand the underlying reasons behind how the Linux operating system is configured to achieve optimal Oracle RAC performance when installing and configuring Linux (see Chapter 6 for more details).

In addition to the Linux servers themselves, a crucial component of any RAC configuration is the dedicated storage array, separate from any of the individual nodes in the cluster, upon which the Oracle database is installed and shared between the nodes (see Chapter 2); therefore, we also cover the aspects of storage I/O relating to RAC. In the context of RAC, I/O relates to the reads and writes performed on a disk subsystem, irrespective of the protocols used. The term storage encompasses all aspects of this I/O that enable communication between the server acting as the RAC node and nonvolatile disk.

Considering hardware presents a challenge in that, given the extraordinary pace of development in computing technology, a snapshot of any particular cluster configuration will soon be made obsolete by the next generation of systems and technologies. So, rather than focus on any individual configuration, we will review the general areas to consider when purchasing hardware for RAC that should remain relevant over time. With this intent, we will also refrain from directly considering different form factors, such as blade or rack-mounted servers. Instead, we will consider the lower-level technology that often lies behind a form factor decision.

The aim of this chapter is to provide a grounding to build a checklist when selecting a hardware platform for RAC on Linux. This will enable you to make an optimal choice. However, one factor that should be abundantly clear before proceeding is that this chapter will not tell you precisely what server, processor, network interconnect, or storage array to purchase. No two Oracle environments are the same; therefore, a different configuration may be entirely applicable to each circumstance.

Oracle Availability

Before beginning the selection of a hardware platform to run Oracle RAC on Linux, your first port of call should be Oracle itself, to identify the architectures on which Oracle releases the Oracle database for Linux with the RAC option.

At the time of writing, the following five architectures (using Oracle terminology) have production releases of the Oracle Database on Linux:

·         x86: A standard 32-bit Intel compatible x86 processor

·         x86-64: A 64-bit extended x86 processor (i.e., Intel EM64T, AMD64)

·         Itanium: The Intel Itanium processor

·         Power: The IBM Power processor

·         zSeries: The IBM mainframe

In this chapter, our focus is on the first two architectures because of the availability of Oracle Enterprise Linux for these platforms as discussed in Chapter 1, and the additional availability of Oracle Database 11g Release 2.

In addition to simply reviewing the software availability, we also recommend viewing the RAC Technologies Matrix for Linux Platforms to identify platform-specific information for running Oracle RAC in a particular environment. With the advent of the most recent generation of online Oracle support, called My Oracle Support, Oracle Product Certification Matrices are no longer available for public access without a support subscription. It is therefore necessary to have a valid login at https://support.oracle.com to access the RAC Technologies Matrix.

If you have a valid My Oracle Support login, at the top-level menu, click the More... tab, followed by Certifications.

On the Certification Information page, enter the search details in the dropdown menus under the Find Certification Information heading. For example, under Product Line, select Oracle Database Products. Under both Product Family and Product Area, select Oracle Database. Under Product, select Oracle Server – Enterprise Edition. And under Product Release, select 11gR2RAC. Leave the other options at their default selections and press the Search button to display the Certification Information by Product and Platform for Oracle Server - Enterprise Edition. Select the Certified link next to the platform of interest to display the Certification Detail. Now click Certification Notes, and the page displayed will include a link to a RAC Technologies Compatibility Matrix (RTCM) for Linux Clusters. These technologies are classified into the following four areas:

·         Platform-Specific Information on Server/Processor Architecture

·         Network Interconnect Technologies

·         Storage Technologies

·         Cluster File System/Volume Manager

It is important to note that these technology areas should not be considered entirely in isolation. Instead, the technology of the entire cluster should be chosen in a holistic manner. For example, the importance of the storage-compatibility matrix published by the storage vendors you are evaluating is worth stressing. Many storage vendors perform comprehensive testing of server architectures running against their technology, often including Oracle RAC as a specific certification area. To ensure compatibility and support for all of your chosen RAC components, the servers should not be selected completely independently of the storage—and vice versa.

An additional subject you’ll want to examine for compatible technology for RAC is cluster software, which, as a software component, is covered in detail in Chapter 8.

In this chapter, we will concentrate on the remaining hardware components and consider the selections of the most applicable server/processor architecture, network interconnect, and storage for your requirements.

Server Processor Architecture

The core components of any Linux RAC configuration are the servers themselves that act as the cluster nodes. These servers all provide the same service in running the Linux operating system, but do so with differing technologies. One of those technologies is the processor, or CPU.

As you have previously seen, selecting a processor on which to run Oracle on an Oracle Linux-supported platform presents you with two choices. Table 4-1 shows the information gleaned from the textual descriptions of these choices in the Oracle technology matrix.

Table 4-1. Processor Architecture Information

Server/Processor Architecture | Processor Architecture Details

Linux x86 | Support on Intel and AMD processors that adhere to the 32-bit x86 architecture.

Linux x86-64 | Support on Intel and AMD processors that adhere to the 64-bit x86-64 architecture. 32-bit Oracle on x86-64 with a 64-bit operating system is not supported; 32-bit Oracle on x86-64 with a 32-bit operating system is supported.

x86 Processor Fundamentals

Although the processor architecture information in Table 4-1 describes two processor architectures, the second, x86-64, is an extension to the instruction set architecture (ISA) of x86. The x86 architecture is a complex instruction set computer (CISC) architecture and has been in existence since 1978, when Intel introduced the 16-bit 8086 CPU. The de facto standard for Linux systems is x86 (and its extension, x86-64) because it is the architecture on which Linux evolved from a desktop-based Unix implementation to one of the leading enterprise-class operating systems. As detailed later in this section, all x86-64 processors support operation in 32-bit or 64-bit mode.

Moore’s Law is the guiding principle for understanding how and why newer generations of servers continue to deliver near-exponential increases in Oracle Database performance at reduced levels of cost. Moore’s Law is the prediction, dating from 1965, by Gordon Moore that, due to innovations in CPU manufacturing process technology, the number of transistors on an integrated circuit can be doubled every 18 months to 2 years. The design of a particular processor is tightly coupled to the silicon process on which it is to be manufactured. At the time of writing, 65nm (nanometer), 45nm, and 32nm processes are prevalent, with 22nm technologies in development. There are essentially three consequences of a more advanced silicon production process:

·         The more transistors on a processor, the greater potential there is for the CPU design to utilize more features: The most obvious examples of this are multiple cores and large CPU cache sizes, the consequences of which you’ll learn more about later in this chapter.

·         For the same functionality, reducing the processor die size reduces the power required by the processor: This makes it possible to either increase the processor clock speed, thereby increasing performance; or to lower the overall power consumption for equivalent levels of performance.

·         Shrinking the transistor size increases the yield of the microprocessor production process: This lowers the relative cost of manufacturing each individual processor.

As processor geometries shrink and clock frequencies rise, however, there are challenges that partly offset some of the aforementioned benefits and place constraints on the design of processors. The most important constraint is that transistor current leakage increases along with the frequency, leading to undesired increases in power consumption and heat in return for gains in performance. Additional constraints particularly relevant to database workloads are memory and I/O latency failing to keep pace with gains in processor performance. These challenges are the prime considerations in the direction of processor design and have led to features such as multiple cores and integrated memory controllers to maintain the hardware performance improvements that benefit Oracle Database implementations. We discuss later in this chapter how the implications of some of these trends require the knowledge and intervention of the DBA.

One of the consequences of the processor fundamentals of Moore’s Law in an Oracle RAC environment is that you must compare the gains from adding additional nodes to an existing cluster on an older generation of technology to those from refreshing or reducing the existing number of nodes based on a more recent server architecture. The Oracle DBA should therefore keep sufficiently up-to-date on processor performance that he can adequately size and configure the number of nodes in a cluster for the required workload over the lifetime of the hosted database applications.

Today’s x86 architecture processors deliver high performance with features such as superscalar execution, pipelining, and out-of-order execution; understanding some of the basics of these features can help in designing an optimal x86 architecture-based Oracle RAC environment.

When assessing processor performance, the clock speed is often erroneously used as the sole comparative measure. The clock speed, or clock rate, is usually measured in gigahertz, where 1GHz represents 1 billion cycles per second. The clock speed determines the speed at which the processor executes instructions. However, the CPU’s architecture is absolutely critical to the overall level of performance, and no reasonable comparison can be made based on clock speed alone.

For a CPU to process information, it needs to first load and store the instructions and data it requires for execution. The fastest mode of access to data is to the processor’s registers. A register can be viewed as an immediate holding area for data before and after calculations. Register access can typically occur within a single clock cycle; for example, on a 2GHz CPU, retrieving the data held in its registers takes one clock cycle of half a billionth of a second (0.5 nanoseconds). A general-purpose register can be used for arithmetic and logical operations, indexing, shifting, input, output, and general data storage before the data is operated upon. All x86 processors have additional registers for floating-point operations and other architecture-specific features.
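
The relationship between clock speed and cycle time is simply reciprocal, and it underpins all of the latency comparisons made in this section. The short Python sketch below reproduces this arithmetic for the example 2GHz CPU used throughout the chapter; it is illustrative only and does not measure real hardware.

# Convert a CPU clock speed into the duration of a single clock cycle.
clock_speed_hz = 2 * 10**9                 # the example 2GHz CPU

cycle_time_seconds = 1.0 / clock_speed_hz
cycle_time_ns = cycle_time_seconds * 10**9

print("Cycle time: %.2f ns" % cycle_time_ns)   # prints "Cycle time: 0.50 ns"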

Like all processors, an x86 CPU sends instructions on a path termed the pipeline through the processor, on which a number of hardware components act on the instruction until it is executed and written back to memory. At the most basic level, these instructions can be classified into the following four stages:

1.  Fetch: The next instruction of the executing program is loaded from memory. In reality, the instructions will already have been preloaded in larger blocks into the instruction cache (we will discuss the importance of cache later in this chapter).

2.  Decode: x86 instructions themselves are not executed directly, but are instead translated into microinstructions. Decoding the complex instruction set into these microinstructions may take a number of clock cycles to complete.

3.  Execute: The microinstructions are executed by dedicated execution units, depending on the type of operation. For example, floating-point operations are handled by dedicated floating-point execution units.

4.  Write-back: The results from the execution are written back to an internal register or system memory through the cache.

This simple processor model executes the program by passing instructions through these four stages, one per clock cycle. However, performance potentially improves if the processor does not need to wait for one instruction to complete write-back before fetching another, and a significant amount of improvement has been accomplished through pipelining. Pipelining enables the processor to be at different stages with multiple instructions at the same time; since the clock speed will be limited by the time needed to complete the longest of its stages, breaking the pipeline into shorter stages enables the processor to run at a higher frequency. For this reason, current x86 enterprise processors often have 10- to 20-stage pipelines. Although each instruction will take more clock cycles to pass through the pipeline, and only one instruction will actually complete on each core per clock cycle, a higher frequency increases the utilization of the processor execution units—and hence the overall throughput.
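
The trade-off between pipeline depth, clock frequency, and throughput can be made concrete with a simple back-of-the-envelope model. The Python sketch below assumes a hypothetical fixed amount of logic per instruction and a small per-stage latch overhead; the numbers are purely illustrative and do not describe any real processor, but they show why a deeper pipeline allows a higher clock rate and greater throughput even though each instruction spends slightly longer in flight.

# Illustrative pipeline model: the same instruction logic split across
# different numbers of stages, with an assumed per-stage latch overhead.
total_logic_ns = 4.0      # assumed total logic delay per instruction
latch_overhead_ns = 0.05  # assumed overhead added by every stage boundary

for stages in (4, 14):
    stage_delay_ns = total_logic_ns / stages + latch_overhead_ns
    frequency_ghz = 1.0 / stage_delay_ns          # clock limited by slowest stage
    latency_ns = stages * stage_delay_ns          # one instruction, start to finish
    # Once the pipeline is full, one instruction completes per clock cycle,
    # so peak throughput scales with the clock frequency.
    print("%2d stages: %.2f GHz clock, %.1f ns instruction latency, "
          "%.2f instructions/ns peak throughput"
          % (stages, frequency_ghz, latency_ns, frequency_ghz))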

One of the most important aspects of performance for current x86 processors is out-of-order execution, which adds the following two stages to the simple example pipeline around the execution stage:

·         Issue/schedule: The decoded microinstructions are issued to an instruction pool, where they are scheduled onto available execution units and executed independently. Maintaining this pool of instructions increases the likelihood that an instruction and its input will be available to process on every clock cycle, thereby increasing throughput.

·         Retire: Because the instructions are executed out of order, they are written to the reorder buffer (ROB) and retired by being put back into the correct order intended by the original x86 instructions before write-back occurs.

Further advancements have also been made in instruction-level parallelism (ILP) with superscalar architectures. ILP introduces multiple parallel pipelines to execute a number of instructions in a single clock cycle, and current x86 architectures support a peak execution rate of at least three and more typically four instructions per cycle.
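
To see what this execution rate means in absolute terms, the theoretical peak instruction throughput of a core is simply its clock frequency multiplied by the peak number of instructions completed per cycle. The sketch below uses the example 2GHz figure and the four-instructions-per-cycle peak quoted above; both are illustrative assumptions rather than a benchmark of any particular CPU.

# Theoretical peak per-core instruction throughput = frequency x peak IPC.
clock_speed_hz = 2 * 10**9        # the example 2GHz CPU
peak_instructions_per_cycle = 4   # typical peak execution rate quoted above

peak_ips = clock_speed_hz * peak_instructions_per_cycle
print("Theoretical peak: %.0f billion instructions per second per core"
      % (peak_ips / 10**9))
# Real database workloads fall well short of this peak because of memory
# latency, branch mispredictions, and dependencies between instructions.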

Although x86 processors operate at high levels of performance, all of the data stored in your database will ultimately reside on disk-based storage. Now assume that your hard-disk drives have an access time of 10 milliseconds. If the example 2GHz CPU were required to wait for a single disk access, it would wait for a period of time equivalent to 20 million CPU clock cycles. Fortunately, the Oracle SGA acts as an intermediary resident in random access memory (RAM) on each node in the cluster. Memory access times can vary, and the CPU cache also plays a vital role in keeping the processor supplied with instructions and data. We will discuss the performance potential of each type of memory and the influence of the CPU cache later in this chapter. However, with the type of random access to memory typically associated with Oracle on Linux on an industry-standard server, the wait will be approximately between 60 and 120 nanoseconds. Even at the lower bound of 60 nanoseconds, this time delay represents 120 clock cycles for which the example 2GHz CPU must wait to retrieve data from main memory.

You now have comparative statistics for accessing data from memory and disk. Relating this to an Oracle RAC environment, the most important question to ask is this: “How does Cache Fusion compare to local memory and disk access speeds?” (One notable exception to this question is for data warehouse workloads, as discussed in Chapter 14.) A good average receive time for a consistent read or current block for Cache Fusion will be approximately two to four milliseconds with a gigabit Ethernet-based interconnect. This is the equivalent of 4 to 8 million clock cycles for remote Oracle cache access, compared to 120 clock cycles for local Oracle cache access. Typically, accessing data from a remote SGA through Cache Fusion gives you a dramatic improvement over accessing data from disk. However, with the increased availability of high-performance solid-state storage, Flash PCIe cards, and enterprise storage that utilizes RAM-based caching (as discussed later in this chapter), it is possible that disk-based requests could complete more quickly than Cache Fusion in some configurations. Therefore, the algorithms in Oracle 11g Release 2 are optimized so that the highest-performing source of data is given preference, rather than simply assuming that Cache Fusion delivers the highest performance in all cases.
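
All of these comparisons reduce to a single calculation: the access latency multiplied by the clock frequency gives the number of clock cycles the CPU could have executed while waiting. The Python sketch below repeats the arithmetic in this section for the example 2GHz CPU; the latency figures are the approximate values discussed above (using the lower bound of each range), not measurements of any specific system.

# Clock cycles spent waiting for each class of data access on the
# example 2GHz CPU. Latencies are the approximate figures discussed
# in this section, not measurements of a specific system.
clock_speed_hz = 2 * 10**9

access_latencies_seconds = [
    ("CPU register",                    0.5e-9),   # one clock cycle
    ("Local memory (SGA)",              60e-9),    # ~60-120 ns; lower bound
    ("Cache Fusion, gigabit Ethernet",  2e-3),     # ~2-4 ms; lower bound
    ("Hard disk access",                10e-3),    # ~10 ms
]

for name, latency in access_latencies_seconds:
    cycles = latency * clock_speed_hz
    print("%-32s %12.0f clock cycles" % (name, cycles))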

Similarly, you may consider supported interconnect solutions with lower latencies, such as InfiniBand, which you’ll learn more about later in this chapter. In this case, Cache Fusion transfers may potentially reduce measurements from milliseconds to microseconds; however, even a single microsecond of latency is the equivalent of 2,000 CPU clock cycles for the example 2GHz CPU, and it therefore represents a penalty in performance for accessing a remote SGA compared to local memory.

x86-64

The 64-bit extension of x86 is called x86-64, but it can also be referred to as x64, EM64T, Intel 64, and AMD64; however, the minor differences in implementation are inconsequential, and all of these names can be used interchangeably.

Two fundamental differences exist between 32-bit x86 and 64-bit x86-64 computing. The most significant is in the area of memory addressability. In theory, a 32-bit system can address memory up to the value of 2 to the power of 32, enabling a maximum of 4GB of addressable memory. A 64-bit system can address up to the value of 2 to the power of 64, enabling a maximum of 16 exabytes, or 16 billion GB, of addressable memory—vastly greater than the amount that could be physically installed into any RAC cluster available today. It is important to note, however, that the practical implementations of the different architectures do not align with the theoretical limits. For example, a standard x86 system actually has 36-bit physical memory addressability behind the 32-bit virtual memory addressability. This 36-bit physical implementation gives the potential to use 64GB of memory with a feature called Physical Address Extension (PAE) to translate the 32-bit virtual addresses to 36-bit physical addresses. Similarly, x86-64 processors typically implement 40-bit or 44-bit physical addressing; this means a single x86-64 system can be configured with a maximum of 1 terabyte or 16 terabytes of memory, respectively. You’ll learn more about the practical considerations of the impact of the different physical and virtual memory implementations later in this chapter.
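
The addressability limits quoted above follow directly from the number of address bits: each additional bit doubles the addressable range. The short sketch below reproduces this arithmetic for the bit widths discussed in this section.

# Addressable memory as a function of address width in bits.
GB = 2**30

address_widths = [
    (32, "x86 virtual addressing"),
    (36, "x86 physical addressing with PAE"),
    (40, "typical x86-64 physical addressing"),
    (44, "larger x86-64 physical addressing"),
    (64, "theoretical x86-64 virtual addressing"),
]

for bits, description in address_widths:
    addressable_gb = 2**bits // GB
    print("%2d bits (%s): %d GB" % (bits, description, addressable_gb))
# 32 bits gives 4GB, 36 bits 64GB, 40 bits 1,024GB (1TB),
# 44 bits 16,384GB (16TB), and 64 bits roughly 16 billion GB (16EB).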

In addition to memory addressability, the processors themselves benefit from the move to 64-bit registers. With 64-bit registers, the processor can manipulate high-precision data more quickly by processing more bits in each operation.

For general-purpose applications, x86-64 processors can operate in three different modes: 32-bit mode, compatibility mode, or 64-bit mode. The mode is selected at boot time and cannot be changed without restarting the system with a different operating system. However, it is possible to run multiple operating systems under the different modes simultaneously within a virtualized environment (see Chapter 5). In 32-bit mode, the processor operates in exactly the same way as standard x86, utilizing the standard eight general-purpose registers. In compatibility mode, a 64-bit operating system is installed, but 32-bit x86 applications can run on the 64-bit operating system. Compatibility mode has the advantage of affording the full 4GB of addressability to each 32-bit application. Finally, the processor can operate in 64-bit mode, realizing the full range of its potential for 64-bit applications.

This compatibility is indispensable when running a large number of 32-bit applications developed on the widely available x86 platform while also mixing in a smaller number of 64-bit applications; however, Oracle’s published certification information indicates that 32-bit Oracle is not supported on a 64-bit version of Linux on the x86-64 architecture. That said, the architecture may be used for 32-bit Linux with 32-bit Oracle or for 64-bit Linux with 64-bit Oracle. The different versions cannot be mixed, though, and compatibility mode may not be used. To take advantage of 64-bit capabilities, full 64-bit mode must be used with a 64-bit Linux operating system, associated device drivers, and 64-bit Oracle—all must be certified specifically for the x86-64 platform.
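
When verifying an environment, it can be useful to confirm the architecture reported by the running kernel and the word size of the software in use. The following minimal Python sketch, using the standard platform and struct modules, reports on the machine it runs on; it is a quick sanity check only and not a substitute for Oracle’s certification information.

# Report the running kernel architecture and the word size of this
# interpreter; a quick sanity check, not an Oracle certification check.
import platform
import struct

machine = platform.machine()                 # e.g. "x86_64" or "i686"
word_size_bits = struct.calcsize("P") * 8    # pointer size of this process

print("Kernel architecture : %s" % machine)
print("Process word size   : %d-bit" % word_size_bits)

if machine == "x86_64" and word_size_bits == 64:
    print("This is a 64-bit environment on the x86-64 architecture.")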

The single most important factor for adopting 64-bit computing for Oracle is the potential for memory addressability beyond the capabilities of a 32-bit x86 system for the Oracle SGA. Additionally, the number of users in itself does not directly impact whether a system should be 32- or 64-bit. However, a significantly large number of users also depends on the memory handling of the underlying Linux operating system for all of the individual processes, and managing this process address space also benefits from 64-bit memory addressability. For this reason, we recommend standardizing on an x86-64 processor architecture for Oracle RAC installations. In addition, memory developments such as NUMA memory features are only supported in the 64-bit Linux kernel; therefore, the advantages of 64-bit computing are significantly enhanced on a server with a NUMA architecture, as discussed later in this chapter.

Multicore Processors and Hyper-Threading

Recall for a moment the earlier discussion of Moore’s Law in this chapter. Whereas improvements in the manufacturing process have produced successive generations of CPUs with increasing performance, the challenges—especially those related to heat and power consumption—have resulted in the divergence of processor design from a focus on ever-advancing clock speeds. One of the most significant of these developments has been the trend towards multicore processors (see Figure 4-1). A multicore processor is one that contains two or more independent execution cores in the same physical processor package or socket.
