
Thursday, December 21, 2017

Oracle Mutex

Mutexes (mutual exclusion objects) are lightweight serialization primitives, a concept borrowed from operating systems, used to control access to shared memory structures. They are similar to latches, which are covered in the following chapters: both are serialization mechanisms that control access to shared data structures within the Oracle SGA.
This serialization prevents a shared structure from being read while it is being modified and helps provide the consistency required by the relational database ACID (Atomicity, Consistency, Isolation, and Durability) model.

Mutexes can be defined and used in various ways. Each data structure protected by a mutex can have its own mutex: a parent cursor has its own mutex, and each of its child cursors has its own mutex as well. A single structure can also be protected by several mutexes, with each mutex guarding a different part of the structure. Although latches and mutexes are similar in that both are serialization mechanisms that protect shared data, mutexes differ from latches in the following ways.

Mutexes are smaller and faster than latches because a mutex get executes fewer instructions than a latch get. They also consume less memory. Because each protected structure can have its own mutex, mutexes greatly reduce the chance of contention compared to a small set of shared latches, giving finer-grained protection and more flexibility.
Another key feature of mutexes is that they can be taken in shared mode by many sessions concurrently. A mutex also plays a dual role: it acts as a serialization mechanism, like a latch, and as a pin, preventing the object it protects from aging out of the Oracle memory buffers while it is in use. Because latches and mutexes are independent mechanisms, an Oracle process can hold a latch and a mutex at the same time.

Starting with Oracle 10g Release 2, Oracle replaced some latch mechanisms with mutexes, claiming that they are faster and more efficient than the traditional mechanisms. To improve cursor execution and hard parse times in the library cache, mutexes replace the library cache latches and library cache pins. Oracle claims that mutexes are faster and use less CPU, which is important for CPU-bound databases where large data buffers have removed I/O as the primary source of contention.
Oracle also claims that mutexes allow better concurrency than the older latch mechanism because the code path is shorter.
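
In practice, mutex contention surfaces in the wait interface as events such as cursor: pin S and cursor: pin S wait on X. As a minimal sketch, assuming a release (10g Release 2 or later) that exposes the V$MUTEX_SLEEP view, the following query summarizes where sessions have had to sleep on mutexes:

-- Sketch: summarize mutex sleeps by mutex type and code location
SELECT mutex_type,
       location,
       sleeps,
       wait_time          -- cumulative wait time as reported by the view
FROM   v$mutex_sleep
ORDER  BY sleeps DESC;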

Tuesday, January 17, 2012

Oracle Database Performance Results with Smart Flash Cache on Sun SPARC Enterprise Midrange Server

This article examines the improvements to Oracle database performance that were observed by adding Database Smart Flash Cache to the configuration. Measurements were made using the iGEN-OLTP 1.6 benchmark, which simulates a lightweight Global Order System. Tests were run with and without the flash cache, each time varying the size of the SGA buffer cache at 10%, 16%, and 20% of the database size. The results demonstrate that an intelligent database that knows how to take advantage of flash-based storage efficiently can achieve significant improvements in performance.
Contents

Introduction
Database Smart Flash Cache
Benchmark Description
System Configuration Details
Test Results
Conclusion
Appendix: Oracle Initialization File init.ora

Introduction

Today’s complex business applications typically house massive volumes of data and serve large numbers of users—a trend that drives performance requirements that are increasingly difficult to attain. To achieve fast response times for data-intensive applications, systems must be able to access data rapidly and transfer it quickly from storage to compute resources for processing. Many data-driven applications suffer from long latencies and slow response times due to I/O bottlenecks that limit throughput between storage and servers. Traditional remedies, such as increasing memory size or short-stroking disk drives by placing data on outer sectors, are costly and power intensive, and they help only up to a point: the fundamental mismatch remains between CPUs that process data in nanoseconds and disk drives that deliver it in milliseconds.

As flash technology moves into the enterprise, it holds promise for accelerating application performance, reducing bottlenecks, and helping to lower data center energy consumption. With a layer of flash-based storage in the form of solid-state drives (SSDs) between traditional disk media and host processors, today’s powerful CPUs can experience less idle time waiting for I/O operations to complete. SSDs deliver data in microseconds and can thus contribute to major improvements in application performance. The question still remains, however, about how to take advantage of flash technology intelligently and efficiently without imposing the additional overhead of actively managing and constantly positioning data into the proper storage tier. To address this challenge, Oracle created the Database Smart Flash Cache feature, which aims to take advantage of this new storage tier, while reducing complexity and without over-burdening administrators in the data center.
Database Smart Flash Cache

Oracle's Database Smart Flash Cache is available in Oracle Database 11g Release 2 for both Oracle Solaris and Oracle Enterprise Linux. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with much faster flash-based storage operations. The Database Smart Flash Cache feature acts as a transparent extension of the database buffer cache using solid-state drive (SSD) or “flash” technology. The flash acts as a level-two cache to the database buffer cache. If a process doesn’t find the block it needs in the buffer cache, it performs a physical read from the level-two SGA buffer pool residing on flash. The read from flash will be quite fast, in the order of microseconds, when compared to performing a physical read operation from a traditional hard disk drive (HDD), which takes milliseconds.
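
As a minimal sketch of how the feature is enabled (the parameter names and values mirror the init.ora in the appendix, where the flash cache lives on the +FLASH ASM disk group; an instance restart is assumed so that the SPFILE changes take effect):

-- Sketch: point the flash cache at the flash storage and size it
ALTER SYSTEM SET db_flash_cache_file = '+FLASH/test' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 20G SCOPE=SPFILE;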

Database Smart Flash Cache with flash storage gives administrators the ability to greatly improve the performance of Oracle databases by reducing the required amount of traditional disk I/O at a much lower cost than adding an equivalent amount of RAM. Software intelligence determines how and when to use the flash storage, and how best to incorporate flash into the database as part of a coordinated data caching strategy to deliver improved performance to applications.

Database Smart Flash Cache technology allows frequently accessed data to be kept in very fast flash storage while most of the data is kept in very cost-effective disk storage. This happens automatically, without the administrator taking any action. Database Smart Flash Cache is smart because it knows when to avoid caching data that will never be reused or will not fit in the cache.

Random reads against tables and indexes are likely to be followed by reads of the same data, so those blocks are normally cached and served from the flash cache when they are not found in the buffer cache. Scans (sequential reads of tables) are generally not cached, since sequentially accessed data is unlikely to be read again soon. Write operations are written through to disk and staged back into the cache if the software determines they are likely to be re-read. Knowing what not to cache is just as important for realizing the performance potential of the cache. For example, the software avoids caching blocks when writing redo logs, writing backups, or writing to a mirrored copy of a block; since these blocks will not be re-read in the near term, there is no reason to devote valuable cache space to them.
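
One way to observe whether the cache is absorbing reads is to look at the flash cache statistics in V$SYSSTAT. The exact statistic names vary by release, so this sketch simply filters on the phrase rather than assuming specific names:

-- Sketch: list whatever flash cache statistics this release exposes
SELECT name, value
FROM   v$sysstat
WHERE  LOWER(name) LIKE '%flash cache%'
ORDER  BY name;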

In addition, Oracle allows you to provide directives at the database table, index, and segment level to ensure that Database Smart Flash Cache is used where desired. Tables can be moved in and out of flash with a simple command, without the need to move the table to different tablespaces, files, or LUNs, as is typically done in traditional storage with flash disks. Only the Oracle Database has this functionality and understands the nature of all the I/O operations taking place on the system. Having knowledge of the complete I/O stack allows optimized use of Database Smart Flash Cache to store only the most frequently accessed data.
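
The per-object directive is expressed through the segment storage clause. The sketch below is hedged: the table names are hypothetical, and it assumes the 11g Release 2 FLASH_CACHE storage attribute, which accepts KEEP, NONE, or DEFAULT:

-- Sketch: keep a hot table's blocks in the flash cache, and exclude a cold one
ALTER TABLE orders  STORAGE (FLASH_CACHE KEEP);
ALTER TABLE history STORAGE (FLASH_CACHE NONE);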

All this functionality occurs automatically without administrator configuration or tuning. This paper discusses the advantages of using Database Smart Flash Cache when running an in-house developed OLTP benchmark on the Sun SPARC Enterprise M4000 server from Oracle using Oracle Solaris 10 and the Sun Flash Accelerator F20 PCIe Card. The following topics are covered in the remaining sections of the paper:

Detailed description of the iGEN-OLTP benchmark
System configuration details
Capabilities of the Sun SPARC Enterprise M4000 and M5000 servers
Sun Flash Accelerator F20 PCIe Card features
Results of the benchmark, including the impact of Database Smart Flash Cache

Benchmark Description

The iGEN-OLTP 1.6 benchmark is an internally developed transaction processing database workload. This workload simulates a lightweight Global Order System, and it was developed from a variety of customer workloads. It has a high degree of concurrency and stresses database commit operations. It is completely random in table row selections and, therefore, it is difficult to 'localize' or optimize the SQL processing. The transactions used in the iGEN benchmark require more computational work when compared to the transactions used in a TPC-C benchmark.

The database has 1.25 million customers residing in it and is approximately 50 GB in size. It consists of six tables: customer, location, industry, product, order, and activity. Each table has no more than six columns, and each has an index.

The application executes five transaction types: light, medium, average, DSS, and heavy. The transactions comprise various SQL statements: read-only selects, joins, averages, updates, and inserts. All tests are run using a mix of these transactions. The description and distribution mix of the transactions are shown in Table 1, and a sketch of one transaction follows the table.
Table 1. iGEN-OLTP 1.6 TRANSACTIONS DESCRIPTION AND DISTRIBUTION
TRANSACTION MIX (PERCENTAGE) DESCRIPTION
Light 16% 1 select for update, 1 select, and 1 update
Medium 35% 2 selects for update, 1 select, and 1 update
Average 6% 1 select for update and 1 compute average from another select
DSS 11% 1 select for update and 1 compute sum from join on 3 tables
Heavy 30% 1 select for update, 1 select, 1 update, and 1 insert
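
To make the mix concrete, here is a hedged sketch of what the Light transaction might look like. The source lists only the table names, so the column names and bind variables below are assumptions for illustration:

-- Sketch of a "Light" transaction: 1 select for update, 1 select, and 1 update
SELECT balance FROM customer WHERE customer_id = :cust_id FOR UPDATE;   -- lock one customer row
SELECT price   FROM product  WHERE product_id  = :prod_id;              -- read-only lookup
UPDATE customer SET balance = balance - :amount WHERE customer_id = :cust_id;
COMMIT;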

The client driver program, which could be run on a middle-tier application server, is a multithreaded Java load generator; each thread makes one connection to the database server via JDBC to simulate a client connection.

The load generator driver requires the following user-supplied options for execution:

User count or connection count
Time for ramp-up to allow all users to establish connections to the database, typically 60 seconds
Time to run in steady state whereby all users start issuing transactions to the database
Think time, which is the time to sleep between individual transactions and is used by each client, typically 100 milliseconds

A number of iGEN-OLTP benchmark runs were performed. Each workload ran for five minutes in steady state with a fixed number of connections for each test. The core metrics measured are transactions per minute (TPM), number of users supported, and average response time. The average response time for any of the transactions must be less than 100 milliseconds for a run to be considered valid.
System Configuration Details

The benchmark was run using Oracle Database 11g Release 2. The hardware and software configuration for both the database server and the load generator are described below:
Database Server: Sun SPARC Enterprise M4000 Server

The database server hardware specifications consisted of the following:

CPU: 4 x SPARC64 VI 2.15-GHz dual-core processors, 2 strands (hardware threads) per core
Cache memory:
L1 cache: 128 KB (instruction) / 128 KB (data) per core
L2 cache: 5 MB shared per processor
Memory size: 16384 megabytes
Network: 2 × Gigabit Ethernet ports, 2 × 10/100 Ethernet ports for accessing CLI, and browser-based interface for management
PCI-X and PCI Express (PCIe): 1 × PCI-X slot and 4 × PCIe slots per I/O tray
Disks: 2 internal 73-GB SAS disk drives
One Sun Flash Accelerator F20 PCIe Card with 4 SSDs in a special form factor on board, each with 24 GB of addressable capacity

The following software versions were used:

Operating System: Oracle Solaris 10 09/10
Database software: Oracle 11g Release 2 Enterprise Edition for Solaris Operating System (SPARC) (64-bit)
Database configuration: See appendix A for the Oracle initialization file, init.ora

Oracle Automatic Storage Management was used to create two disk groups with normal redundancy and without any filesystems. One disk group was for data files and was called "DATA"; the other disk group consisted of the flash storage and was called "FLASH."
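
A hedged sketch of how the two disk groups might be created from the ASM instance follows; the device paths are hypothetical placeholders, not the devices used in this test:

-- Sketch: one disk group for data files, one on the F20 flash modules for the flash cache
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/rdsk/c1t0d0s6', '/dev/rdsk/c1t1d0s6';
CREATE DISKGROUP flash NORMAL REDUNDANCY
  DISK '/dev/rdsk/c2t0d0s6', '/dev/rdsk/c2t1d0s6';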
iGEN-OLTP Load Generator: Sun Fire X4440 Server

The iGEN-OLTP load generator server hardware specifications consisted of the following:

CPU: 4 x AMD Opteron 2.3-GHz quad-core processors
L2 cache memory: 512 KB per processor core
Memory size: 16384 megabytes
Network: Four 10/100/1000 Base-T Ethernet ports and one dedicated 10/100Base-T Ethernet port for management
PCI-X and PCIe: One PCIe x16 slot, four PCIe x8 slots, and one PCIe x4 slot
Disks: Eight 2.5" 73-GB SAS internal hot-swap disk drives

The operating system software version used was Oracle Solaris 10 05/08.
Sun SPARC Enterprise M4000 and M5000 Midrange Servers

Oracle’s Sun SPARC Enterprise M4000 and M5000 servers are highly reliable, easy to manage, and vertically scalable systems with many of the benefits of traditional mainframes— without the associated cost or complexity. These midrange enterprise servers were designed to be extremely flexible, scalable, and robust with mainframe-class reliability and availability capabilities. These servers feature a balanced, scalable symmetric multiprocessing (SMP) design that uses the latest generation of SPARC64 processors connected to memory and I/O by a high-speed, low-latency system interconnect that delivers exceptional throughput to applications.

Architected to reduce planned and unplanned downtime, the Sun SPARC Enterprise M4000 and M5000 servers include mainframe-class reliability, availability, and serviceability (RAS) capabilities to avoid outages, reduce recovery time, and improve overall system uptime. These servers can deliver enterprise-class service levels for mission critical workloads, supporting medium to large databases, business processing applications (ERP, SCM, CRM, OLTP), BIDW (database, datamart, DSS), scientific/engineering applications, and consolidation/virtualization projects.
Capabilities Overview

The Sun SPARC Enterprise M4000 server can be configured with up to four dual-core SPARC64 VI processors or four quad-core SPARC64 VII processors, with two simultaneously executing threads per core (each thread is seen as a processor by the operating system) and up to 256-GB error correcting code (ECC) memory in a dense, rack-optimized, six rack-units (RU) system.

The Sun SPARC Enterprise M5000 server offers double the number of cores and memory in a 10 RU system. The SPARC64 processors incorporate the symmetric multiprocessing (SMP) architecture, which allows any CPU to access any memory board on the system regardless of location. These processors also feature advanced multithreading technologies that improve system performance by maximizing processor utilization.

In both servers, a high-performance system backplane interconnects processors and local memory with the I/O subsystem. The system interconnect or bus was designed to minimize latency and provide maximum throughput, regardless of whether the workload is compute, I/O, or memory intensive. Implemented as point-to-point connections that utilize packet-switched technology, this interconnect delivers 32 GB/second of peak bandwidth in the Sun SPARC Enterprise M4000 server and 64 GB/second in the Sun SPARC Enterprise M5000 server.
Dynamic Domains and Dynamic Reconfiguration

The Sun SPARC Enterprise M4000 and M5000 servers can be partitioned into two and four independent Dynamic Domains, respectively. These Dynamic Domains are electrically isolated partitions, each running independent instances of Oracle Solaris. These servers feature advanced resource control supporting the allocation of sub-system board resources, including CPUs, memory, and I/O trays, either in their entirety to one domain or divided logically between domains. Domains are used for server consolidation and to run separate parts of a solution, such as an application server, Web server, and database server. Hardware or software failures in one Dynamic Domain do not affect applications running in other domains.

Dynamic Reconfiguration technology provides added value to Dynamic Domains by providing administrators with the ability to shift computing resources between domains in accordance with changes in the workload without taking the system offline. This technology enhances system availability by allowing administrators to perform maintenance, live upgrades, and physical changes to system hardware resources, while the server continues to execute applications and without the need for system reboots.
Advanced Reliability, Availability, and Serviceability Features

Specifically designed to support complex, network computing solutions and stringent high-availability requirements, the Sun SPARC Enterprise M4000 and M5000 servers include redundant and hot-swap system components, diagnostic and error recovery features throughout their design, and built-in remote management features.

The Sun SPARC Enterprise M4000 and M5000 servers feature important technologies that detect and correct failures early and keep faulty components from causing repeated downtime. This advanced architecture fosters high levels of application availability and rapid recovery from many types of hardware faults, often with no impact to users or system functionality.

The following features work together to raise application availability:

End-to-end data protection detects and corrects errors throughout the system, ensuring complete data integrity. This includes support for error marking, instruction retry, L1 and L2 cache dynamic degradation, up to 128-GB error-correcting code (ECC) protection, total SRAM and register protection, ECC and Extended ECC protection for memory, and optional memory mirroring.
Mainframe-class fault isolation helps the server isolate errors within component boundaries and offline only the relevant chips instead of the entire component. This feature applies to CPUs, memory access controllers, crossbar ASICs, system controllers, and I/O ASICs. For example, persistent CPU soft errors can be resolved by automatically offlining either a thread, core, or entire CPU. Similarly, memory pages can be taken offline proactively in response to multiple corrections for data access for a specific memory DIMM.
Dynamic CPU resource deallocation provides processor fault detection, isolation, and recovery. This feature dynamically reallocates CPU resources to an operational system using Dynamic Reconfiguration without interrupting the applications that are running.
Periodic component status checks are performed to determine the status of many system devices to detect signs of an impending fault. Recovery mechanisms are triggered to prevent system and application failure.

Reliability and Availability Features of Oracle Solaris 10

The ability to rapidly diagnose, isolate, and recover from hardware and application faults is essential to increase reliability and availability of the system. In addition to the error detection and recovery features provided by the hardware, Oracle Solaris 10 takes a big leap forward in self-healing with the introduction of Oracle Solaris Fault Manager and Oracle Solaris Service Manager technology.

Oracle Solaris Fault Manager promotes availability by automatically diagnosing faults in the system and initiating self-healing actions to help prevent service interruptions. The Oracle Solaris Fault Manager diagnosis engine produces a fault diagnosis once discernible patterns are observed from a stream of incoming errors. Following error identification, the Oracle Solaris Fault Manager provides information to agents that know how to respond to specific faults. Problem components can be configured out of a system before a failure occurs—and in the event of a failure, this feature initiates automatic recovery and application re-start. For example, an agent designed to respond to a memory error might determine the memory addresses affected by a specific chip failure and remove the affected locations from the available memory pool.

Oracle Solaris Service Manager converts the core set of services packaged with the operating system into first-class objects that administrators can manipulate with a consistent set of administration commands, including start, stop, restart, enable, disable, view status, and snapshot. Oracle Solaris Service Manager unifies service control by managing the interdependency between services, ensuring that they are started (or restarted following service failure) in the appropriate order. It is integrated with Oracle Solaris Fault Manager and is activated in response to fault detections.

With Oracle Solaris 10, business-critical applications and essential system services can continue uninterrupted in the event of software failures, major hardware component breakdowns, and software misconfiguration problems.
Sun Flash Accelerator F20 PCIe Card

Oracle’s Sun Flash Accelerator F20 PCIe Card is an innovative, low-profile PCIe card that supports onboard, enterprise-quality, solid-state based storage. The Sun Flash Accelerator F20 PCIe Card delivers a tremendous performance boost to applications using flash storage technology—up to 100 K I/O operations per second (IOPS) for random 4-K reads, compared to about 330 IOPS for traditional disk drives—in a compact PCIe form factor. Thus, a single Sun Flash Accelerator F20 PCIe Card delivers about the same number of IOPS as three hundred 15-K RPM disk drives. At the same time, it consumes a fraction of the power and space that those disk drives require. Adding one or more cards to an Oracle rack mounted server turns virtually any Sun x86 or UltraSPARC processor-based system into a high-performance storage server.

The Sun Flash Accelerator F20 PCIe Card, shown in Figure 1, combines four flash modules—known as Disk on Module (DOM) units—each containing 24 GB of enterprise-quality SLC NAND flash and 64 MB of dynamic random access memory (DRAM), for a total of 96 GB flash and 256 MB DRAM per PCIe card. Each card also incorporates a supercapacitor module that provides enough energy to flush DRAM contents to persistent flash storage in the event of a sudden power outage, which helps to enhance data integrity.

Figure 1: Sun Flash Accelerator F20 PCIe Card
Sun Flash Accelerator F20 PCIe Card Highlights

The Sun Flash Accelerator F20 PCIe Card provides these benefits:

Low latency. Flash technology can complete an I/O operation in microseconds, placing it between hard disk drives (HDDs) and DRAM in terms of latency. Because flash technology contains no moving parts, it avoids the long seek times and rotational latencies inherent with traditional HDD technology. As a result, data transfers to and from the onboard flash devices are significantly faster than what electromechanical disk drives can provide. A single Sun Flash Accelerator F20 PCIe Card can provide up to 100 K IOPS for read operations, compared to mere hundreds of IOPS for HDDs.
Enterprise-level reliability. Sun engineers worked closely with NAND manufacturers to make specific reliability enhancements to the flash devices. These enterprise-quality SLC NAND devices exhibit greater endurance than commercially available flash components used in consumer products, such as MP3 players and digital cameras, and they are rated for more than 2 million hours MTBF (mean time between failures), which is greater than most disk drives. The onboard flash devices are managed by a flash memory controller. Each controller provides internal RAID, sophisticated wear leveling, error correction code (ECC), and bad block mapping to provide the highest level of longevity and endurance. Each flash module includes an additional 8 GB (or 25 percent) of reserved internal storage that is used by the controller to replace worn out blocks. In addition, a supercapacitor unit flushes DRAM contents to flash storage if a power loss occurs. Even if a supercapacitor fails, the design maintains data integrity because it automatically enables write-through mode.
Simplified management. The Sun Flash Accelerator F20 PCIe Card presents itself to the server as an HBA, and the four DOMs are treated as four separate 24-GB disks. OS commands that manage disk drives apply equally to the DOM storage modules, so no special device drivers are required and no re-compilation of applications is necessary. In addition, firmware upgrades for the flash controller can be downloaded and applied as needed.
Flexible configurations. The Sun Flash Accelerator F20 PCIe Card can be deployed in virtually any qualified Sun server that accepts a PCIe-based HBA.
Leading eco-responsibility. The solid-state DOMs operate at low power (approximately 2 watts for each 24-GB module), which is especially low compared to disk devices (typically around 12 watts each). The card itself consumes about 16.5 watts during normal operation.

While several other flash-based storage solutions exist today, the Sun Flash Accelerator F20 PCIe Card provides the performance benefit of flash storage in a convenient, compact, low-profile PCIe form factor. Occupying a single slot on the motherboard, the card’s dense PCIe form factor is particularly beneficial for existing servers with a limited number of available disk slots, or when you do not wish to replace existing disk drives with SSDs. And because it is a PCIe card, its I/O operations do not suffer from disk controller limitations.
Test Results

We ran several tests on the Sun SPARC Enterprise M4000 server, each time varying the size of the SGA buffer cache at 10%, 16%, and 20% of the database size. In each of these tests, the same workload was run with flash and without flash, and the size of the flash storage was also varied. A test was considered valid only if it completed with an average response time of less than 100 milliseconds. The results obtained from the various runs are detailed below.
Results with SGA Buffer Cache Size 10% of Database
Table 2. Results with SGA Buffer Cache Size 10% of Database
SGA buffer cache size = 5 GB NO Flash WITH FLASH 15 GB WITH FLASH 20 GB
Number of users 400 570 575
Maximum qualified throughput: TPM 56659.17 68102.67 73070.83
Avg. response time in sec (must be < 0.1 sec): all three runs measured between 0.070 and 0.09 sec
Results with a 15-GB Database Smart Flash Cache

Sizing the Database Smart Flash Cache at 3 times the original SGA buffer cache (15 GB) yielded the following improvements over the run without flash (a quick arithmetic check follows the list):

42.5% increase in number of users
20% more TPM
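
These percentages follow directly from Table 2. As a quick check, the query below (run against DUAL purely for the arithmetic) reproduces them from the table's user counts and TPM figures:

-- 570 vs. 400 users and 68,102.67 vs. 56,659.17 TPM, from Table 2
SELECT ROUND((570 / 400 - 1) * 100, 1)           AS pct_more_users,   -- 42.5
       ROUND((68102.67 / 56659.17 - 1) * 100, 1) AS pct_more_tpm      -- 20.2
FROM   dual;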

Results with a 20-GB Database Smart Flash Cache

Sizing the Database Smart Flash Cache at 4 times the original SGA buffer cache (20 GB) yielded the following improvements over the run without flash:

43.75% increase in number of users
29% more TPM

These results are displayed in the graphs in Figure 2.

Figure 2: Test Results with 5-GB SGA
Results with SGA Buffer Cache Size 16% of Database
Table 3. Results with SGA Buffer Cache Size 16% of Database
SGA buffer cache size = 8 GB NO Flash WITH FLASH 16 GB WITH FLASH 20 GB
Number of users 480 575 600
Maximum qualified throughput: TPM 64479.50 72437.33 75576.50
Avg. response time in sec (must be < 0.1 sec): all runs measured between 0.079 and 0.089 sec
Results with a 16-GB Database Smart Flash Cache

Sizing the Database Smart Flash Cache at 2 times the original SGA buffer cache (16 GB) yielded the following improvements over the run without flash:

20% increase in number of users
12.3% more TPM

Results with a 20-GB Database Smart Flash Cache

Sizing the Database Smart Flash Cache at 2.5 times the original SGA buffer cache (20 GB) yielded the following improvements over the run without flash:

25% increase in number of users
17.2% more TPM

These results are displayed in the graphs in Figure 3.

Figure 3: Test Results with 8-GB SGA
Results with SGA Buffer Cache Size 20% of Database
Table 4. Results with SGA Buffer Cache Size 20% of Database
SGA buffer cache size = 10 GB NO Flash WITH FLASH 20 GB WITH FLASH 22 GB
Number of users 575 590 595
Maximum qualified throughput: TPM 75175.83 72571.67 75901.00
Avg. response time in sec (must be < 0.1 sec): all three runs measured between 0.083 and 0.087 sec
Results with a 20-GB Database Smart Flash Cache

Sizing the Database Smart Flash Cache at 2 times the original SGA buffer cache (20 GB) yielded the following changes relative to the run without flash:

2.6% increase in number of users
3.5% less TPM

Results with a 22-GB Database Smart Flash Cache

Sizing the Database Smart Flash Cache at 2.2 times the original SGA buffer cache (22 GB) yielded the following improvements over the run without flash:

3.4% increase in number of users
0.9% more TPM

In this case, most of the operations are already served from the buffer cache, and the Database Smart Flash Cache would need to be at least four times the size of the SGA to make a noticeable difference.

These results are displayed in the graphs in Figure 4.

Figure 4: Test Results with 10-GB SGA
Conclusion

A key metric for any OLTP database application is the number of transactions that can be executed over a given period of time. In addition to the number of transactions per minute (TPM), it is also imperative that as many users as possible can be served within acceptable response times. Otherwise, organizations would have to deploy many more systems to provide a positive end-user experience.

The results from Oracle’s iGEN-OLTP benchmark tests, which are shown in the tables above, suggest that when the SGA buffer cache size in memory is equal to 10% of the total database size, the system can scale to support 43% more users and 29% greater TPM than on a Sun SPARC Enterprise M4000 server without Database Smart Flash Cache technology. This was achieved by taking advantage of less expensive, reliable, and more power efficient flash-based storage at four times the capacity of the SGA buffer size. These results are equivalent to those obtained when doubling the SGA buffer cache size in memory to 20% of the total database size and without flash-based storage, which is a more expensive solution due to the cost of additional memory and power requirements.

A Sun SPARC Enterprise M5000 server was not available for this test. However, this larger server offers double the number of cores and memory and twice the system bus bandwidth of the Sun SPARC Enterprise M4000 server. Because of the proven scalability of Oracle Solaris 10 and Oracle Database, it is possible to extrapolate that the results on the Sun SPARC Enterprise M5000 server would be just as good, if not better, enabling it to support double the number of users and TPM in each of the tests described above.

Database Smart Flash Cache technology from Oracle thus provides scalability to meet the demands of ever larger workloads and growing numbers of users, delivering breakthrough advantages for application performance. Just by adding the Sun Flash Accelerator F20 PCIe Card to an existing server, and without installing any special driver or re-compiling any applications, an existing setup can be scaled to support many more users, handle many more transactions, accelerate application performance, increase business productivity, improve ROI, and enhance the end-user experience.

By using the Sun SPARC Enterprise M4000 and M5000 servers running Oracle Solaris 10 with the Sun Flash Accelerator F20 PCIe Card and Oracle Database 11g Release 2, an intelligent database that knows how to take advantage of flash-based storage efficiently, you can experience significant breakthroughs in performance and business agility. This computing environment also supports high service levels through the mainframe-class reliability, availability, and serviceability features in the Sun SPARC Enterprise M4000 and M5000 servers, as well as the highly reliable flash modules in the Sun Flash Accelerator F20 PCIe Card, which have been tested and rated for more than 2 million hours MTBF. These innovations from Oracle bring enterprise computing even closer to the ideal of complete automation in the data center.
References

For more information, visit the Web resources listed in Table 5.
Table 5. Web resources for further information
Web Resource Description Web Resource URL
Sun Flash Accelerator F20 PCIe Card www.oracle.com/us/products/servers-storage/storage/disk-storage/043966.html
Sun SPARC Enterprise M4000 Server www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/m-series/031646.htm
Sun SPARC Enterprise M5000 Server www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/m-series/031732.htm
Oracle Solaris www.oracle.com/solaris

Appendix: Oracle Initialization File init.ora

############################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
############################################################
_array_update_vector_read_enabled = TRUE
parallel_max_servers = 64
parallel_min_servers = 0
db_writer_processes = 3
_imu_pools =0
_in_memory_undo =FALSE
_smm_advice_enabled =FALSE
_undo_autotune =FALSE
thread = 1
db_block_checksum = false
db_cache_size = 5000m
db_file_multiblock_read_count = 128
db_files = 1023
dml_locks = 8000
global_names = FALSE
java_pool_size = 20m
job_queue_processes = 4
log_buffer = 4194304
log_checkpoints_to_alert = TRUE
nls_date_format = DD-MON-RR
nls_numeric_characters = ".,"
nls_sort = binary
nls_language = american
nls_territory = america
replication_dependency_tracking = FALSE
shared_pool_size = 1200m
shared_pool_reserved_size = 150m
cursor_space_for_time = FALSE
sort_area_size = 512000
sort_area_retained_size = 512000
undo_retention = 30
_in_memory_undo=false
undo_management = AUTO
filesystemio_options = setall
_library_cache_advice = FALSE
_smm_advice_enabled = FALSE
db_cache_advice = OFF
_db_mttr_advice = OFF
timed_statistics = TRUE
_trace_files_public=true
cursor_space_for_time = TRUE
transactions_per_rollback_segment = 1
session_cached_cursors = 200
cursor_sharing = similar
_db_block_hash_latches = 65536

###########################################
# Smart Flash Cache fields
###########################################
db_flash_cache_file="+FLASH/test"
db_flash_cache_size=20G

###########################################
# Cache and I/O
###########################################
db_block_size=8192

###########################################
# Cursors and Library Cache
###########################################
open_cursors=3024
###########################################
# Database Identification
###########################################
db_domain=""
db_name=wcb

###########################################
# File Configuration
###########################################
db_recovery_file_dest=+RECOVERY
db_recovery_file_dest_size=4070572032

###########################################
# Miscellaneous
###########################################
compatible=11.2.0.0.0
diagnostic_dest=/u01/app/oracle

###########################################
# Processes and Sessions
###########################################
processes=2200

###########################################
# Security and Auditing
###########################################
audit_file_dest=/u01/app/oracle/admin/wcb-sav/adump
audit_trail=db
remote_login_passwordfile=EXCLUSIVE
###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=wcbXDB)"


Revision 1.0, 04/15/2011
