
How To Configure SSH for a RAC Installation [ID 300548.1]

Original post, category: Linux操作系统, author: linger_52102, posted 2011-08-26 09:13:30



Modified 17-FEB-2011     Type HOWTO     Status PUBLISHED


In this Document

Applies to:

Oracle Server - Enterprise Edition - Version 10.1 to 11.2
Information in this document applies to any platform.
Reviewed 22-Oct-2008


This document explains how to configure SSH, which is required for a RAC installation. The instructions in the installation guide are also correct, but occasionally they do not work, and the reason for that is not clear. After some investigation, the steps below were found to work in those cases as well.

Starting with 11gR2, the Oracle Universal Installer can perform the SSH setup for you via the 'SSH Connectivity' button.


To configure SSH manually, perform the following steps on each node in the cluster.

$ cd $HOME
$ mkdir .ssh
$ chmod 700 .ssh
$ cd .ssh
$ ssh-keygen -t rsa

Now accept the default location for the key file, then enter and confirm a passphrase (you can also press Enter twice for an empty passphrase).

$ ssh-keygen -t dsa

Now accept the default location for the key file, then enter and confirm a passphrase (you can also press Enter twice for an empty passphrase).

$ cat *.pub >> authorized_keys.nodeX

(nodeX should be the node's hostname, to differentiate the files later)
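The per-node key setup above can be sketched as one script. This is a sketch run against a throwaway directory so it can be tried safely; on a real node the directory is $HOME/.ssh, "node1" is a placeholder hostname, and OpenSSH's ssh-keygen is assumed to be installed.

```shell
#!/bin/sh
# Throwaway stand-in for $HOME so the sketch is safe to run.
demo_home=$(mktemp -d)
mkdir "$demo_home/.ssh"
chmod 700 "$demo_home/.ssh"
cd "$demo_home/.ssh"

# -N '' gives an empty passphrase (the "press Enter twice" case);
# -q and -f suppress the interactive prompts.
ssh-keygen -q -t rsa -N '' -f id_rsa
# DSA key generation was removed from recent OpenSSH releases,
# so tolerate a failure here.
ssh-keygen -q -t dsa -N '' -f id_dsa 2>/dev/null || true

# Collect this node's public keys under a per-node file name.
cat *.pub >> authorized_keys.node1
```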

Now perform the same steps on the other nodes in the cluster.
When this is done on every node, copy each node's authorized_keys.nodeX file into $HOME/.ssh/ on all the nodes.

For example, with 4 nodes, after the copy each node's .ssh directory will contain 4 files named authorized_keys.nodeX, one per node.
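The copy step could be scripted as below. Node names are placeholders, and the scp commands are only echoed (not executed) so they can be reviewed before running for real.

```shell
#!/bin/sh
# Distribution sketch: from node1, push its authorized_keys.node1
# file to $HOME/.ssh/ on the other nodes.
THIS=node1
OTHERS="node2 node3 node4"
cmds=$(for n in $OTHERS; do
  echo "scp \$HOME/.ssh/authorized_keys.$THIS $n:.ssh/"
done)
echo "$cmds"
```

Removing the `echo` (and repeating the equivalent on every node) performs the actual copies.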

Then on EACH node continue the configuration of SSH by doing the following:

$ cd $HOME/.ssh
$ cat *.node* >> authorized_keys
$ chmod 600 authorized_keys

NOTE: ALL public keys must appear in ALL authorized_keys files, INCLUDING the LOCAL public key for each node.
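The merge step can be illustrated with dummy key lines standing in for real ssh-keygen output; the node names and fake key text below are placeholders, not part of the note.

```shell
#!/bin/sh
# Pretend each of 4 nodes contributed an authorized_keys.<node> file
# holding its RSA and DSA public keys (two fake lines per node).
tmp=$(mktemp -d)
cd "$tmp"
for n in node1 node2 node3 node4; do
  printf 'ssh-rsa AAAAfake oracle@%s\nssh-dss AAAAfake oracle@%s\n' "$n" "$n" \
    > "authorized_keys.$n"
done

# The merge from the note: every per-node file, including the local
# node's own, ends up in the single authorized_keys file.
cat *.node* >> authorized_keys
chmod 600 authorized_keys
wc -l < authorized_keys   # 4 nodes x 2 keys = 8 lines
```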

To test that everything is working correctly, execute the following command against each node:

$ ssh <nodename> date

For example, in a 4-node environment:

$ ssh node1 date
$ ssh node2 date
$ ssh node3 date
$ ssh node4 date

Repeat these 4 commands on each node, including ssh back to the node itself (nodeX is the hostname of the node).

The first time, you will be asked whether to add the node to a file called 'known_hosts'; this is expected, so answer 'yes'. After that, when correctly configured, each command must return the date without prompting for a password.
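The round of checks can be scripted as follows. node1..node4 are the hypothetical hostnames from the example, and the BatchMode and ConnectTimeout options are additions not in the original note: BatchMode makes ssh fail instead of prompting, so a misconfigured node pair shows up immediately.

```shell
#!/bin/sh
# Check passwordless ssh to every node (run this on each node in turn).
NODES="node1 node2 node3 node4"
report=$(for n in $NODES; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$n" date >/dev/null 2>&1; then
    echo "$n: ok"
  else
    echo "$n: no passwordless ssh - recheck keys and known_hosts"
  fi
done)
echo "$report"
```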

Note: the above works if no passphrase was provided during RSA and DSA key generation. If you did provide a passphrase, two additional steps are needed.

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

These commands tell the ssh agent to add the keys to the shell being used. Whenever a new shell is started, you need to repeat these last two commands to make sure ssh works properly.

ssh will not allow passwordless access if the permissions on the home directory of the account you are using allow write access for group or others.

You will also see 'permission denied' errors when the permissions on $HOME are 777 or 775.
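A quick way to spot this permission problem is sketched below; the `perm_ok` helper is not part of the note, and GNU coreutils' `stat -c` is assumed.

```shell
#!/bin/sh
# Flag paths whose group or other permission digit includes the write
# bit (2, 3, 6 or 7) - exactly the 777/775 cases described above.
perm_ok() {
  mode=$(stat -c %a "$1") || return 1
  case $mode in
    *[2367][0-7]|*[2367]) echo "$1 ($mode): writable by group/other"; return 1 ;;
    *)                    echo "$1 ($mode): ok"; return 0 ;;
  esac
}

# Demonstrate on a throwaway directory.
demo=$(mktemp -d)
chmod 775 "$demo"
perm_ok "$demo"   # flagged: 775 allows group write
chmod 700 "$demo"
perm_ok "$demo"   # ok
```

On a real node you would run `perm_ok "$HOME"` and `perm_ok "$HOME/.ssh"`.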

Disable the banner (/etc/banner) on all cluster nodes when you:

  • run clusterverify (cluvfy, runcluvfy)
  • install software
  • patch the system

Please work with your System Administrator or contact your Operating System support if you still have problems setting up ssh.


NOTE:264063.1 - Public node is not available and PRKC-1044 Reported by OUI Cluster Configuration

From "ITPUB Blog"; please credit the source when reprinting.


