Oracle Database 12c: Cloud computing with multitenant architecture

Pluggable Database

Flex Cluster/Flex ASM

Also new to version 12c is the concept of Flex ASM (Automatic Storage Management). This feature enables the use of an ASM instance that does not run locally on the server. The data is transmitted over a network (Ethernet or InfiniBand). In extreme cases, Flex ASM enables consolidation and separation of database storage by establishing a central storage cluster that is accessible to all other databases in the enterprise. Flex ASM is a prerequisite for another new feature that increases the number of nodes, and thus the computing power available in a RAC, without each node needing access to the shared storage: Flex Cluster.
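Whether a cluster's ASM is running in Flex mode, and on how many nodes, can be checked with the standard Grid Infrastructure tools. A minimal sketch, assuming a 12c Grid Infrastructure installation (these commands only run against a live cluster):

```shell
# Check whether ASM runs in Flex mode and on how many nodes
asmcmd showclustermode      # reports whether "Flex mode" is enabled
srvctl status asm -detail   # lists the nodes currently hosting an ASM instance
srvctl modify asm -count 3  # set the ASM cardinality (number of ASM instances)
```

With Flex ASM, the cardinality rather than the node count determines how many ASM instances run; databases on nodes without a local instance connect to a remote one over the ASM network.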

Flex clusters consist of hub-and-leaf nodes. A hub node is a node with direct access (e.g., via LAN) to storage. In comparison, a leaf node only has indirect access to the storage via Flex ASM, but it is still a full member of the cluster.
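The role of a node in a Flex Cluster is managed with `crsctl`. A sketch, assuming a running 12c Flex Cluster (changing a node's role requires a restart of the Clusterware stack on that node):

```shell
# Query and change a node's role in a Flex Cluster
crsctl get cluster mode status  # confirms the cluster runs in "flex" mode
crsctl get node role config     # shows the configured role: hub or leaf
crsctl set node role leaf       # reconfigure the local node as a leaf node
```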

Application Continuity

A few years ago, the introduction of Transparent Application Failover (TAF) revolutionized contemporary cluster concepts by allowing SELECT statements interrupted by a node failure to continue transparently on a surviving node (under certain conditions). In version 12c, Oracle extends this concept to all transactions, dubbing the feature Application Continuity.

In the best case, the failure of one node in the RAC goes completely unnoticed by the user – no matter what kind of transaction they were performing at the time. However, this requires adjustments on the client side and is tied to the use of certain classes and libraries. Currently, JDBC Thin, UCP, and WebLogic are supported. Work is in progress on support for PeopleSoft, Siebel, and Oracle Fusion.
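On the server side, Application Continuity is enabled per database service. A sketch with hypothetical database and service names (`orcl`, `app_svc`); the timeout and retention values are examples, not recommendations:

```shell
# Enable Application Continuity on a RAC service
srvctl modify service -db orcl -service app_svc \
  -failovertype TRANSACTION \
  -commit_outcome TRUE \
  -replay_init_time 600 \
  -retention 86400
```

Clients must then connect through a replay-capable driver – for JDBC Thin, this means using the replay data source (`oracle.jdbc.replay.OracleDataSourceImpl`) instead of the plain one.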

Little Gems

Besides all the big new features, some smaller useful extensions are now included:

  • You can now move a data file online.
  • Besides the familiar SYSDBA and SYSOPER administrative privileges, new ones exist for more granular assignment of permissions: SYSBACKUP for backup and recovery, SYSDG for Data Guard, and SYSKM for key and wallet management.
  • Thanks to the far sync option, Data Guard can support synchronous replication over distances greater than the usual 40-100 km: a local far sync instance accepts the redo data synchronously and then forwards it asynchronously to the remote standby site. A switchover occurs directly between the primary and the remote standby – the far sync instance itself is not involved.
  • Cancelled switchovers can now be resumed.
  • DML on global temporary tables no longer generates redo, so data can be written to temporary tables even on a read-only standby database.
  • Sequences of the primary DB can be used in standby mode.
  • Database upgrades without downtime are (almost) automatic.
  • The total size of the PGA can be limited with the PGA_AGGREGATE_LIMIT parameter.
  • The patch inventory can be queried directly from within the database.
  • ACFS (ASM Cluster File System) supports storage of all data files. ACFS snapshots can be writable.
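A few of these gems in action – a sketch to be run as a privileged user; the file paths and the size limit are examples only:

```sql
-- Move a data file while the database stays online (paths are examples)
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf'
  TO '/u02/oradata/users01.dbf';

-- Hard cap on total PGA memory consumption
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 4G;

-- Query the patch inventory from within the database
SELECT DBMS_QOPATCH.GET_OPATCH_LSINVENTORY FROM DUAL;
```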
