The Exadata products address three key dimensions of database I/O performance:
- More pipes, to deliver data faster
- Wider pipes, providing extremely high bandwidth
- Shipping only the data required to satisfy SQL requests
There are two members of the Oracle Exadata product family:
- HP Oracle Exadata Storage Server
  o Also known as the Exadata cell
  o Runs the Exadata Storage Server Software provided by Oracle
- HP Oracle Database Machine
- The Exadata cell comes preconfigured with:
  o two Intel 2.66 GHz quad-core processors,
  o twelve disks connected to a smart array storage controller with 512 MB of non-volatile cache,
  o 8 GB of memory,
  o dual-port InfiniBand connectivity (16 gigabits per second of bandwidth),
  o a management card for remote access,
  o redundant power supplies, and
  o all software preinstalled.
- The cell can be installed in a typical 19-inch rack.
- Two versions of the Exadata cell are offered:
  o The first is based on 450 GB Serial Attached SCSI (SAS) drives, providing up to 1.5 TB of uncompressed user data capacity and up to 1 GB/second of data bandwidth per cell.
  o The second is based on 1 TB Serial Advanced Technology Attachment (SATA) drives, providing up to 3.3 TB of uncompressed user data capacity and up to 750 MB/second of data bandwidth per cell.
- No cell-to-cell communication is ever done or required.
- A rack can contain up to eighteen Exadata cells.
- The peak data throughput for a SAS-based rack configuration would therefore be 18 GB/second (18 cells × 1 GB/second each).
- If additional storage capacity is required, add more
racks with Exadata cells to scale to any required bandwidth or capacity level.
- Once a new rack is connected, the new Exadata disks
can be discovered by the Oracle Database and made available.
- Data is mirrored across cells to ensure
that the failure of a cell will not cause loss of data, or inhibit data
accessibility.
- SQL processing is offloaded from the database server
to the Oracle Exadata Storage Server.
- All features of the Oracle Database are fully supported with Exadata.
- Exadata works equally well with single-instance and RAC databases.
- Functionality such as Data Guard, Recovery Manager (RMAN), Streams, and other database tools is administered the same way, with or without Exadata.
- Users and database administrators leverage the same tools and knowledge they are familiar with today, because these work just as they do with traditional non-Exadata storage.
- Exadata and non-Exadata storage may be used concurrently for database storage to facilitate migration to, or from, Exadata storage.
- Exadata has also been integrated with Oracle Enterprise Manager (EM) Grid Control by installing an Exadata plug-in into the existing EM system.
- The Oracle Storage Server Software resident in the Exadata cell runs under Oracle Enterprise Linux (OEL). OEL is accessible in a restricted mode to administer and manage the Exadata cell.
- Oracle Database 11g has been significantly enhanced to take advantage of Exadata storage.
- The Exadata software is optimally divided between the database server and the Exadata cell.
- Two versions of the Database Machine are offered: the Database Machine Full Rack and the Database Machine Half Rack.
- The Full Rack contains:
  o Fourteen Exadata Storage Servers (either SAS or SATA)
  o Eight HP ProLiant DL360 G5 Oracle Database 11g database servers (dual-socket quad-core Intel 2.66 GHz processors, 32 GB RAM, four 146 GB SAS drives, a dual-port InfiniBand Host Channel Adapter (HCA), dual 1 Gb/second Ethernet ports, and redundant power supplies)
  o An Ethernet switch for communication from the Database Machine to database clients
  o Keyboard, Video, Mouse (KVM) hardware for local administration of the system
- Using SAS-based Exadata cells, the Full Rack provides up to 21 TB of user data capacity and up to 14 GB/second of I/O bandwidth (14 cells × 1.5 TB and × 1 GB/second, respectively).
- Using SATA-based Exadata cells, it provides up to 46 TB of user data capacity and up to 10.5 GB/second of I/O bandwidth.
- The Half Rack contains half of the components of the Full Rack configuration.
- The database server and Exadata Storage Server Software communicate using iDB, the Intelligent Database protocol. iDB is implemented in the database kernel and transparently maps database operations to Exadata-enhanced operations.
- iDB is used to ship SQL operations down to the Exadata cells for execution and to return query result sets to the database kernel. Instead of returning database blocks, Exadata cells return only the rows and columns that satisfy the SQL query, as the sketch below illustrates.
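To make that concrete, consider a hypothetical query (the sales table and its columns are invented for illustration):

    SELECT cust_id, amount
    FROM sales
    WHERE amount > 10000;

With a smart scan, the cells read the table, evaluate the amount > 10000 predicate locally, and ship back over iDB only the cust_id and amount values of the qualifying rows; without offload, every block of sales would have to travel to the database server to be filtered there.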
- Like existing I/O protocols, iDB can also directly read and write ranges of bytes to and from disk, so when offload processing is not possible Exadata operates like a traditional storage device for the Oracle Database.
- CELLSRV (Cell Services) is the primary component of the Exadata software running in the cell and provides the majority of Exadata storage services. CELLSRV is multi-threaded software that communicates with the database instance on the database server and serves blocks to databases based on the iDB protocol. It:
  o provides the advanced SQL offload capabilities,
  o serves Oracle blocks when SQL offload processing is not possible, and
  o implements the DBRM I/O resource management functionality to meter out I/O bandwidth to the various databases and consumer groups issuing I/O.
- If a cell dies during a smart scan, the uncompleted portions of the smart scan are transparently routed to another cell for completion.
- SQL EXPLAIN PLAN shows when an Exadata smart scan is used; see the sketch below.
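A minimal sketch of checking for offload, reusing the hypothetical sales table from above:

    EXPLAIN PLAN FOR
      SELECT cust_id, amount FROM sales WHERE amount > 10000;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

When offload processing is possible, the full scan appears in the plan as TABLE ACCESS STORAGE FULL, and offloadable filters are listed as storage(...) predicates.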
- The Oracle Database and Exadata cooperatively execute various SQL statements.
- Two other database operations that are offloaded to Exadata are incremental database backups and tablespace creation; a sketch of each follows.
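A minimal sketch of both operations (the +DATA disk group and tablespace names are assumptions). During the incremental backup, the cells filter out unchanged blocks; during tablespace creation, the cells format the blocks of the new datafile, so in neither case does unneeded data travel to the database server:

    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

    SQL> CREATE TABLESPACE exa_ts DATAFILE '+DATA' SIZE 100G;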
- The Database Resource Manager (DBRM) feature in Oracle Database 11g has been enhanced for use with Exadata. DBRM lets the user define and manage intra- and inter-database I/O bandwidth in addition to CPU, undo, degree of parallelism, active sessions, and the other resources it manages.
- An Exadata administrator can create a resource plan that specifies how I/O requests should be prioritized. This is accomplished by putting the different types of work into service groupings called Consumer Groups. Consumer groups can be defined by username, client program name, function, or length of time the query has been running. The user can set a hierarchy of which consumer group gets precedence in I/O resources and how much of the I/O resource is given to each consumer group, as sketched below.
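A minimal sketch of such a plan using the DBMS_RESOURCE_MANAGER package (the group and plan names are invented and the percentages are illustrative):

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      -- A consumer group for long-running reports.
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORTS', 'long-running reports');

      -- A plan that favors all other work over reports.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME', 'favor interactive work');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'DAYTIME', group_or_subplan => 'REPORTS',
        comment => 'cap reports', mgmt_p1 => 20);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'DAYTIME', group_or_subplan => 'OTHER_GROUPS',
        comment => 'everything else', mgmt_p1 => 80);

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

The same plan, communicated to the cells, is what Exadata uses to meter I/O; inter-database allocations are configured on the cell itself via the CellCLI ALTER IORMPLAN command.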
- Automatic Storage Management (ASM) is used to manage the storage in the Exadata cell.
- ASM provides data protection against drive and cell failures.
- A Cell Disk is the virtual representation of a physical disk, minus the System Area LUN (if present).
- A Cell Disk is represented by a single LUN, which is created and managed automatically by the Exadata software when the physical disk is discovered.
- Cell Disks can be further virtualized into one or more Grid Disks.
- It is also possible to partition a Cell Disk into multiple Grid Disk slices, as in the sketch below.
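A minimal CellCLI sketch of carving the storage (the hot/cold prefixes and the 300G size are assumptions). Grid disks are allocated from the outermost, fastest tracks first, so the grid disks created first get the better-performing portion of each drive:

    CellCLI> CREATE CELLDISK ALL
    CellCLI> CREATE GRIDDISK ALL PREFIX=hot, SIZE=300G
    CellCLI> CREATE GRIDDISK ALL PREFIX=cold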
- Example:
  o Once the Cell Disks and Grid Disks are configured, ASM disk groups are defined across the Exadata configuration.
  o Two ASM disk groups are defined, one across the "hot" grid disks and a second across the "cold" grid disks.
  o All of the "hot" grid disks are placed into one ASM disk group and all the "cold" grid disks are placed in a separate disk group.
  o When data is loaded into the database, ASM evenly distributes the data and I/O within the disk groups.
  o ASM mirroring can be activated for these disk groups to protect against disk failures for both, either, or neither of the disk groups.
  o Mirroring can be turned on or off independently for each of the disk groups.
- Lastly, to protect against the failure of an entire Exadata cell, ASM failure groups are defined. Failure groups ensure that mirrored ASM extents are placed on different Exadata cells; see the sketch below.
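A minimal sketch of the disk group definitions, assuming the hot/cold grid disk prefixes from the example above. The 'o/*/...' strings are how ASM addresses Exadata grid disks, and ASM automatically places grid disks from the same cell into the same failure group, so NORMAL redundancy mirrors extents across cells:

    CREATE DISKGROUP hot NORMAL REDUNDANCY
      DISK 'o/*/hot*'
      ATTRIBUTE 'compatible.rdbms' = '11.1.0.0.0',
                'compatible.asm' = '11.1.0.0.0',
                'cell.smart_scan_capable' = 'TRUE';

    CREATE DISKGROUP cold NORMAL REDUNDANCY
      DISK 'o/*/cold*'
      ATTRIBUTE 'compatible.rdbms' = '11.1.0.0.0',
                'compatible.asm' = '11.1.0.0.0',
                'cell.smart_scan_capable' = 'TRUE';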
- A single database can be partially stored on Exadata storage and partially on traditional storage devices.
- Tablespaces can reside on Exadata storage, non-Exadata storage, or a combination of the two, and this is transparent to database operations and applications. To benefit from the Smart Scan capability of Exadata storage, however, the entire tablespace must reside on Exadata storage. This co-residence and co-existence is a key feature enabling online migration to Exadata storage; a sketch follows.
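A minimal sketch of both placements (all names are invented; +HOT is assumed to be an Exadata-based ASM disk group):

    -- Entirely on Exadata storage: eligible for Smart Scan.
    CREATE TABLESPACE hot_ts DATAFILE '+HOT' SIZE 10G;

    -- Mixed placement: allowed and transparent to applications,
    -- but the tablespace is not eligible for Smart Scan.
    CREATE TABLESPACE mixed_ts
      DATAFILE '+HOT' SIZE 10G,
               '/u01/oradata/db1/mixed01.dbf' SIZE 10G;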
- Online migration can be done if the existing database is deployed on ASM and is using ASM redundancy.
- Migration can be done using Recovery Manager (RMAN); see the sketch after this list.
- Data Guard can also be used to facilitate a migration.
- All these approaches provide a built-in safety net, as you can undo the migration very gracefully if unforeseen issues arise.
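Minimal sketches of the first two approaches (disk group, disk, and path names are all invented). With ASM, Exadata grid disks are added to the existing disk group and the legacy disks dropped in one statement; ASM rebalances online while the database stays up:

    ALTER DISKGROUP data
      ADD DISK 'o/*/data*'
      DROP DISK data_legacy_01, data_legacy_02
      REBALANCE POWER 8;

With RMAN, the database is copied to Exadata storage and then switched over to the copy (the switch is done while the database is mounted, followed by recovery):

    RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
    RMAN> SWITCH DATABASE TO COPY;
    RMAN> RECOVER DATABASE;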
- With the Exadata architecture, all single points of failure are eliminated. Familiar features such as mirroring, fault isolation, and protection against drive and cell failure have been incorporated into Exadata to ensure continual availability and protection of data.
- Hardware Assisted Resilient Data (HARD) is built into Exadata:
  o Designed to prevent data corruptions before they happen.
  o Provides higher levels of protection and end-to-end validation for your data.
  o Exadata performs extensive validation of the data, including checksums, block locations, magic numbers, head and tail checks, alignment errors, etc.
  o Implementing these data validation algorithms within Exadata prevents corrupted data from being written to permanent storage.
  o Furthermore, these checks and protections are provided without the manual steps required when using HARD with conventional storage; for comparison, see the sketch below.
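For context, on conventional storage the database-side block validation that complements HARD is enabled through initialization parameters; a minimal sketch (the values shown are illustrative choices, not requirements):

    ALTER SYSTEM SET DB_BLOCK_CHECKSUM = FULL SCOPE=BOTH;
    ALTER SYSTEM SET DB_BLOCK_CHECKING = MEDIUM SCOPE=BOTH;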
- Flashback
  o The Flashback feature works in Exadata the same as it would in a non-Exadata environment.
- Recovery Manager (RMAN)
  o All existing RMAN scripts work unchanged in the Exadata environment.