This is the official documentation; it mainly addresses three problems that appear when installing under EL5.
When installing 10gR2 RAC on Oracle Enterprise Linux 5 (OEL5), RHEL5, or SLES10, there are three issues users must be aware of.
Issue#1: To install 10gR2, you must first install the base release, which is 10.2.0.1. Because these OS versions are newer than the installer expects, use the following command to invoke the installer:
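A representative invocation from the staged 10.2.0.1 installation media; the -ignoreSysPrereqs flag tells the Oracle Universal Installer to proceed even though this OS release is not listed in its prerequisite file:

```
./runInstaller -ignoreSysPrereqs
```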
Issue#2: At the end of root.sh on the last node, vipca will fail to run with the following error:
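The failure is a shared-library load error from the JVM that vipca launches; a representative example (the CRS home path will differ in your environment):

```
.../crs/jdk/jre//bin/java: error while loading shared libraries:
libpthread.so.0: cannot open shared object file: No such file or directory
```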
Also, srvctl will show similar output if the workaround below is not implemented.
Issue#3: After working around Issue#2 above, vipca will fail to run with the following error if the VIP IPs are in a non-routable range (10.x.x.x, 172.16-31.x.x, or 192.168.x.x):
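The vipca error reads roughly as follows (the interface name eth0 is illustrative; yours may differ):

```
The given interface(s), "eth0" is not public.
Public interfaces should be used to configure virtual IPs.
```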
Cause
These releases of the Linux kernel fix an old bug in Linux threading that Oracle had worked around by setting LD_ASSUME_KERNEL in both vipca and srvctl. That workaround is no longer valid on OEL5, RHEL5, or SLES10, hence the failures.
Solution
To work around Issue#2 above, edit vipca (in the CRS bin directory on all nodes) to undo the setting of LD_ASSUME_KERNEL. After the IF statement around line 120, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:
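A sketch of what the edited section should look like; the exact line numbers and architecture tests may differ slightly between versions of the shipped vipca wrapper script:

```shell
arch=`uname -m`                     # existing code in the vipca wrapper
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19           # the old threading workaround
  export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL              # <-- line added by this workaround
```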
Similarly for srvctl (in the CRS and, where installed, the RDBMS and ASM bin directories on all nodes), unset LD_ASSUME_KERNEL by adding one line; the section around line 168 should then look like this:
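A sketch of the edited srvctl section, under the same caveat about exact line numbers:

```shell
LD_ASSUME_KERNEL=2.4.19             # existing lines in srvctl
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL              # <-- line added by this workaround
```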
Note that we explicitly unset LD_ASSUME_KERNEL rather than merely commenting out its setting, to handle the case where the user has it set in their environment (login shell).
To work around Issue#3 (vipca failing on non-routable VIP IP ranges, whether run manually or during root.sh): if you still have the OUI window open, click OK and it will create the "oifcfg" information; cluvfy will then fail because vipca did not complete successfully. Skip ahead in this note, run vipca manually, then return to the installer, and cluvfy will succeed. Otherwise, you may configure the interfaces for RAC manually using the oifcfg command as root, as in the following example (from any node):
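A representative example, assuming a public network on eth0 (subnet 192.168.1.0) and a private interconnect on eth1 (subnet 10.10.10.0); substitute your own interfaces and subnets, and $CRS_HOME stands for your Clusterware home:

```
$CRS_HOME/bin/oifcfg setif -global eth0/192.168.1.0:public
$CRS_HOME/bin/oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
$CRS_HOME/bin/oifcfg getif
```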
The goal is to get the output of "oifcfg getif" to include both public and cluster_interconnect interfaces; of course, you should substitute your own IP addresses and interface names from your environment. To get the proper IPs in your environment, run this command:
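The interface-to-subnet mapping can be listed as root with:

```
$CRS_HOME/bin/oifcfg iflist
```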
Running VIPCA:
After implementing the above workaround(s), you should be able to invoke vipca manually (as root, from the last node) and configure the VIP IPs via the GUI interface.
Make sure the DISPLAY environment variable is set correctly and that you can open xclock or other X applications from that shell.
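For example (the workstation host name is a placeholder for your own X server, and $CRS_HOME for your Clusterware home):

```
export DISPLAY=<your_workstation>:0.0
xclock &                            # confirm X applications display
$CRS_HOME/bin/vipca                 # run as root from the last node
```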
Once vipca completes, all the Clusterware resources (VIP, GSD, ONS) will be started. There is no need to re-run root.sh, since vipca is its last step.
To verify the Clusterware resources are running correctly:
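One way to check, assuming $CRS_HOME points at your Clusterware home, is the crs_stat utility; every resource should report an ONLINE target and state:

```
$CRS_HOME/bin/crs_stat -t
```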
You may now proceed with the rest of the RAC
installation.