
Part 2 – vagrant up – get your Oracle infrastructure up and running


Last week, in the first part of this blog, we had a short introduction on how to set up an Oracle infrastructure with Vagrant and Ansible. Remember that all the files for this example are available here: https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
Get the example code:

git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible

If you have prepared your environment (Ansible, Vagrant and Oracle VirtualBox installed) and provided the Oracle software zip files, then you can start building your test infrastructure with the simple call:
vagrant up
Cleanup is also easy: stop the vagrant machines and delete all traces with
vagrant destroy
How does this work?
vagrant up starts Vagrant, which sets up two virtual servers using a sample box with CentOS 7.2.
When this is finished, Vagrant calls Ansible for provisioning, which configures the Linux servers, installs the Oracle software and creates your databases on the target servers in parallel.

Vagrant configuration
All the configuration for Vagrant is in one file called Vagrantfile.
I used a box with CentOS 7.2, which you can find among other vagrant boxes here: https://atlas.hashicorp.com/search
config.vm.box = "boxcutter/centos72"
When you run vagrant up for the first time, it will download the vagrant box:
$ vagrant up

Bringing machine 'dbserver1' up with 'virtualbox' provider...
Bringing machine 'dbserver2' up with 'virtualbox' provider...
==> dbserver1: Box 'boxcutter/centos72' could not be found. Attempting to find and install...
dbserver1: Box Provider: virtualbox
dbserver1: Box Version: >= 0
==> dbserver1: Loading metadata for box 'boxcutter/centos72'
dbserver1: URL: https://atlas.hashicorp.com/boxcutter/centos72
==> dbserver1: Adding box 'boxcutter/centos72' (v2.0.21) for provider: virtualbox
dbserver1: Downloading: https://atlas.hashicorp.com/boxcutter/boxes/centos72/versions/2.0.21/providers/virtualbox.box
==> dbserver1: Successfully added box 'boxcutter/centos72' (v2.0.21) for 'virtualbox'!
==> dbserver1: Importing base box 'boxcutter/centos72'...

I have chosen a private network for the virtual servers and use the vagrant-hostmanager plugin to take care of the /etc/hosts files on all guest machines (and optionally your localhost).
You can add this plugin to Vagrant with:
vagrant plugin install vagrant-hostmanager
The corresponding part in the Vagrantfile will look like this:
config.hostmanager.enabled = true
config.hostmanager.ignore_private_ip = false # include private IPs of your VM's
config.vm.hostname = "dbserver1"
config.vm.network "private_network", ip: "192.168.56.31"

ssh Configuration
The Vagrant box already comes with an ssh key configuration and, if security does not matter in your demo environment, the easiest way to configure the ssh connection to your guest nodes is to use the same ssh key for all created virtual hosts.
config.ssh.insert_key = false # Use the same insecure key provided by the box for each machine
After bringing up the virtual servers you can display the ssh settings:
vagrant ssh-config
The important lines from the output are:
Host dbserver1
HostName 127.0.0.1
User vagrant
Port 2222
IdentityFile /home/user/.vagrant.d/insecure_private_key
You should be able to reach your guest server without a password as user vagrant:
vagrant ssh dbserver1
Then you can switch to user oracle (password: welcome1) or root (default password for vagrant boxes: vagrant):
su - oracle
or connect directly with ssh:
ssh vagrant@127.0.0.1 -p 2222 -i /home/user/.vagrant.d/insecure_private_key
Virtual Disks
I added additional virtual disks because I wanted to separate the data file destination from the fast recovery area destination.

# attach disks only locally
if ! File.exist?("dbserver#{i}_disk_a.vdi") # create disks only once
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_a.vdi", '--size', 8192 ]
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_b.vdi", '--size', 8192 ]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_a.vdi"]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_b.vdi"]
end # create disks only once

Provisioning with Ansible
At the end of the Vagrantfile, provisioning with Ansible is called:
N = 2
(1..N).each do |i| # do for each server i
  ...
  if i == N
    config.vm.provision "ansible" do |ansible| # vm.provisioning
      #ansible.verbose = "v"
      ansible.playbook = "oracle-db.yml"
      ansible.groups = { "dbserver" => ["dbserver1","dbserver2"] }
      ansible.limit = 'all'
    end # end vm.provisioning
  end
end
To prevent the Ansible provisioning from starting before all servers have been set up by Vagrant, I included the condition if i == N, where N is the number of desired servers.

Ansible Inventory
The Ansible Inventory is a collection of guest hosts against which Ansible will work.
You can either put the information in an inventory file or let Vagrant create an Inventory file for you. Vagrant does this if you did not specify any inventory file.
To enable Ansible to connect to the target hosts without a password, Ansible has to know the ssh key provided by the vagrant box.
Example Ansible Inventory:
# Generated by Vagrant
dbserver2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
dbserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
[dbserver]
dbserver1
dbserver2
You can see that the inventory created by Vagrant provides the necessary information for Ansible to connect to the targets and also defines the group dbserver, which includes the servers dbserver1 and dbserver2.

Ansible configuration
Tell Ansible where to find the inventory in ansible.cfg:
nocows=1
hostfile = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
host_key_checking = False

Ansible Variables
In this example I have put the general variables for all servers containing an Oracle Database into this file:
group_vars/dbserver
The more specific variables, including those used to create the database such as the database name and character set, can be adapted individually for each server:
host_vars/dbserver1, host_vars/dbserver2
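
For illustration, a group_vars/dbserver file could look like the sketch below. The variable names oracle_user, oracle_home and installation_folder are the ones referenced later in this post, but the values shown are only assumptions, so check the example repository for the real content:

# group_vars/dbserver - hypothetical sketch, not the actual file from the repository
oracle_user: oracle
oracle_group: dba
oracle_base: /u01/app/oracle
oracle_home: /u01/app/oracle/product/12.1.0.2/dbhome_1
installation_folder: /u01/stage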

Ansible Playbook
The Ansible playbook is a simple text file written in YAML syntax, which is easily readable.
Our playbook oracle-db.yml has only one play, called "Configure Oracle Linux 7 with Oracle Database 12c", which will be applied to all servers belonging to the group dbserver. In my example Vagrant creates the vagrant inventory and initiates the play of the playbook, but you can also start it stand-alone or repeat it if you want.
ansible-playbook oracle-db.yml
This is the whole playbook, to configure the servers and install Oracle Databases:
$ cat oracle-db.yml
---
- name: Configure Oracle Linux 7 with Oracle Database 12c
  hosts: dbserver
  become: True
  vars_files:
    # User Passwords hashed are stored here:
    - secrets.yml
  roles:
    - role: disk_layout
    - role: linux_oracle
    - role: oracle_sw_install
      become_user: '{{ oracle_user }}'
    - role: oracle_db_create
      become_user: '{{ oracle_user }}'

Ansible roles
To make the playbook oracle-db.yml lean and more flexible, I have split all the tasks into different roles. This makes it easy to reuse parts of the playbook or to skip parts. For example, if you only want to install the Oracle software on the server, but do not want to create databases, you can just delete the role oracle_db_create from the playbook.
You (and Ansible) will find the file containing the tasks of a role in roles/my_role_name/tasks/main.yml.
There can be further directories. The default directory structure looks like below. If you want to create a new role you can even create the directory structure by using ansible-galaxy. Ansible Galaxy is Ansible's official community hub for sharing Ansible roles: https://galaxy.ansible.com/intro

# example to create the directory structure for the role "my_role_name"
ansible-galaxy init my_role_name


# default Ansible role directory structure
roles/
my_role_name/
defaults/
files/
handlers/
meta/
tasks/
templates/
vars/
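
For illustration, a minimal tasks file for one of these roles could look like the sketch below; the package name and paths are assumptions and not taken from the example repository:

# roles/linux_oracle/tasks/main.yml - hypothetical sketch
- name: Install the Oracle preinstall package
  yum:
    name: oracle-rdbms-server-12cR1-preinstall
    state: present

- name: Create the Oracle base directory
  file:
    path: /u01/app/oracle
    state: directory
    owner: oracle
    group: dba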

Ansible Modules
Ansible will run the tasks described in the playbook on the target servers by invoking Ansible Modules.
This Ansible Web Page http://docs.ansible.com/ansible/list_of_all_modules.html shows information about Modules ordered by categories.
You can also get information about all the Ansible modules from command line:

# list all modules
ansible-doc --list
# example to show documentation about the Ansible module "copy"
ansible-doc copy

One Example:
To install the Oracle software with a response file, I use the Ansible module called "template". Ansible uses Jinja2, a templating engine for Python.
This makes it very easy to design reusable templates. For example, Ansible will replace {{ oracle_home }} with the variable which I have defined in group_vars/dbserver, and then copy the response file to the target servers:

Snippet from the Jinja2 template db_install.rsp.j2:

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME={{ oracle_home }}

Snippet from roles/oracle_sw_install/tasks/main.yml:

- name: Generate the response file for software only installation
  template: src=db_install.rsp.j2 dest={{ installation_folder }}/db_install.rsp

Ansible Adhoc Commands – Some Use Cases
Immediately after installing Ansible, you can already use it to gather facts from your localhost, which will give you a lot of information:
ansible localhost -m setup
Use an Ansible ad-hoc command with the ping module to check if you can reach all target servers listed in your inventory file:

$ ansible all -m ping
dbserver2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
dbserver1 | SUCCESS => {
"changed": false,
"ping": "pong"
}

File transfer – spread a file to all servers in the group dbserver
ansible dbserver -m copy -b -a "src=/etc/hosts dest=/etc/hosts"
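
If you want to run the whole playbook against only a part of the inventory, or see what would change without applying anything, the standard ansible-playbook options can be used, for example:

# dry run of the playbook, limited to dbserver1
ansible-playbook oracle-db.yml --limit dbserver1 --check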

Conclusion
With the open source tools Vagrant and Ansible you can easily automate the setup of your infrastructure.
Even if you do not want to automate everything, Ansible still can help you with your daily work if you want to check or apply something on several servers.
Just group your servers in an inventory and run an Ansible Adhoc Command or write a small playbook.

Please keep in mind that this is a simplified example for an automated Oracle Database Installation.
Do not use this example for production environments.

 



Naming of archivelog files with a non-existing top-level archivelog directory


In Oracle 12.2, an archive log destination is accepted even if its top-level directory does not exist:
oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/
total 2267920
drwxr-xr-x. 2 oracle dba        96 Dec  6 05:36 arch ...

The database accepts this non-existing archivelog destination:
SQL> alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
System altered.

But not this:
SQL> alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4';
alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16032: parameter LOG_ARCHIVE_DEST_4 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux-x86_64 Error: 2: No such file or directory

The log file format is set as follows:
SQL> show parameter log_archive_format;
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.dbf
SQL>

 

Now let's see what the archive log files look like in log_archive_dest_3:
oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/arch/arch2*
-rw-r-----. 1 oracle dba 3845120 Dec  6 05:36 /u02/oradata/DMK/arch/arch21_5_960106002.dbf

So Oracle simply prepends the non-existing top-level directory name to the archive log file name (arch2 + 1_5_960106002.dbf becomes arch21_5_960106002.dbf).
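
If you want to cross-check the file names the database recorded for these archive logs, you can also query v$archived_log (just a quick sanity check):

SQL> select dest_id, name from v$archived_log where dest_id = 3 and rownum <= 3;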

 


ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image


While playing with a brand new ODA X7-2M, I faced a strange behaviour after re-imaging the ODA with the latest version 12.2.1.2.0. Basically, after re-imaging and running configure-firstnet, the next step is to import the GI clone into the repository before creating the appliance. Unfortunately this command fails with the error DCS-10001: Internal error encountered: Fail to start hand shake to localhost:7070. So let's have a look at how to fix it…

First of all, doing a re-image is really straightforward and works very well. I simply access the ILOM remote console to attach the ISO file for the ODA, in this case patch 23530609 from MOS, and restart the box from the CDROM. After approx. 40 minutes you have a brand new ODA running the latest release.

Of course, instead of re-imaging, I could "simply" update/upgrade the DCS agent to the latest version. Let's say that I like to start from a "clean" situation when deploying a new environment, and patching a system that has not even been deployed yet sounds a bit strange to me ;-)

So once re-imaged, the ODA is ready for deployment. The first step is to configure the network so that I can SSH to it and go ahead with the appliance creation. This takes only 2 minutes using the command configure-firstnet.

The last requirement before running the appliance creation is to import the GI clone, here the patch p27119393_122120, into the repository. Unfortunately that's exactly where the problem starts…

[Screenshot: odacli update-repository failing with the DCS-10001 hand shake error]

Hmmm… I can't get it into the repository due to a strange hand shake error. So I will check if at least the web interface is working (…of course using Chrome…)

[Screenshot: the web interface is not reachable either]

Same thing here, it is not possible to get into the web interface at all.

While searching a bit for this error, we finally landed in the Known Issues chapter of the ODA 12.2.1.2.0 Release Notes, which sounded promising. Unfortunately none of the listed errors really matched our case. However, a small search in the page for the error message pointed out the following case:

[Screenshot: the matching known issue from the release notes]

OK, the error is ODA X7-2HA related, but let's give it a try.

[Screenshot: restarting the DCS agent]

Once the DCS agent is restarted, just re-try the update-repository:
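
For reference, the sequence looks roughly like this. The agent restart command is the one from the documented workaround for this Oracle Linux 6 based image and the zip file name is only an example, so adapt both to your environment:

# restart the DCS agent (as root)
initctl stop initdcsagent
initctl start initdcsagent

# re-try the import of the GI clone (file name is an example)
odacli update-repository -f /tmp/oda-sm-12.2.1.2.0-171124-GI-12.2.0.1.zip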

[Screenshot: the update-repository job is now submitted successfully]

Here we go! The job has been submitted and the GI clone is imported in the repository :-)

After that the CREATE APPLIANCE will run like a charm.

Hope it helped!

 

 


Documentum – Silent Install – Things to know, binaries & JMS installation


Documentum introduced silent installations for its software some time ago already. The way to use them changed a little bit over time, but it seems they finally found their way. This blog will be the first of a series presenting how to work with silent installations in Documentum, because it is true that they are not really well documented and most probably not much used at the moment.

We are using this where possible for our customers and it is true that it is really helpful to avoid human errors and install components more quickly. Be aware that this isn’t perfect! There are some parameters with typos, some parameters that are really not self-explanatory, so you will need some time to understand everything but, in the end, it is still helpful.

Using the silent installation is a first step but you will still need a lot of manual interventions to execute these as well as actually making your environment working. I mean it only replaces the GUI installers so everything you were doing around that is still needed (preparation of files/folders/environment, custom start/stop scripts, service setup, Java Method Server (JMS) configuration, Security Baselines, SSL setup, aso…). That’s why we also developed internally scripts or playbooks (Ansible for example) to perform everything around AND use the Documentum silent installations. In this blog and more generally in this series, I will only talk about the silent installations coming from Documentum.

Let’s start with the basis:

  1. Things you need to know
  2. Documentum Content Server installation (binaries & JMS)

 

1. Things you need to know

  • Each and every component installation needs its own properties file that is used by the installer to know what to install and how to do it; this properties file is basically all you need to prepare.
  • As I mentioned above, there are some typos in a few parameters coming from the properties files like “CONGINUE” instead of “CONTINUE”. These aren’t errors in my blogs, the parameters are really like that. All the properties files I’m showing here have been tested and validated in a lot of environments, including PROD ones in High Availability.
  • To know more about the silent installation, you can check the installation documentation. There isn’t much to read about it but still some potentially interesting information.
  • The Documentum documentation does NOT contain any description of the parameters you can/should use, that's why I will try in each blog to describe them as much as possible.
  • You can potentially do several things at once using a single silent properties file, the only restriction for that is that it needs to use the same installer. Therefore, you could install a docbroker/connection broker, a docbase/repository and configure/enable a licence using a single properties file but you wouldn't be able to do the silent installation of the binaries as well because it needs another installer. That's definitely not what I'm doing because I find it messy, I really prefer to separate things, so I know I'm using only the parameters that I need for a specific component and nothing else.
  • There are examples provided when you install Documentum. You can look at the folder “$DM_HOME/install/silent/templates” and you will see some properties file. In these files, you will usually find most of the parameters that you can use but from what I remember, there are a few missing. Be aware that some files are for Windows and some are for Linux, it’s not always the same because some parameters are specific to a certain OS:
    • linux_ files are for Linux obviously
    • win_ files are for Windows obviously
    • cfs_ files are for a CFS/Remote Content Server installation (to provide High Availability to your docbases/repositories)
  • If you look at the folder “$DM_HOME/install/silent/silenttool”, you will see that there is a utility to generate silent files based on your current installation. You need to provide a silent installation file for a Content Server and it will generate for you a CFS/Remote CS silent installation file with most of the parameters that you need. Do not 100% rely on this file, there might still be some parameters missing but present ones should be the correct ones. I will write a blog on the CFS/Remote CS as well, to provide an example.
  • You can generate silent properties files by running the Documentum installers with the following command: "<installer_name>.<sh/bin> -r <path>/<file_name>.properties". This will write the parameters you selected/enabled/configured into the <file_name>.properties file so you can re-use it later.
  • To install an additional JMS, you can use the jmsConfig.sh script or jmsStandaloneSetup.bin for an IJMS (Independent JMS – Documentum 16.4 only). It won’t be in the blogs because I’m only showing the default one created with the binaries.
  • The following components/features can be installed using the silent mode (it is possible that I’m missing some, these are the ones I know):
    • CS binaries + JMS
    • JMS/IJMS
    • Docbroker/connection broker
    • Licences
    • Docbase/repository (CS + CFS/RCS + DMS + RKM)
    • D2
    • Thumbnail

 

2. Documentum Content Server installation (binaries & JMS)

Before starting, you need to have the Documentum environment variables defined ($DOCUMENTUM, $DM_HOME, $DOCUMENTUM_SHARED), that doesn’t change. Once that is done, you need to extract the installer package (below I used the package for a CS 7.3 on Linux with an Oracle DB):

[dmadmin@content_server_01 ~]$ cd /tmp/dctm_install/
[dmadmin@content_server_01 dctm_install]$ tar -xvf Content_Server_7.3_linux64_oracle.tar
[dmadmin@content_server_01 dctm_install]$
[dmadmin@content_server_01 dctm_install]$ chmod 750 serverSetup.bin
[dmadmin@content_server_01 dctm_install]$ rm Content_Server_7.3_linux64_oracle.tar

 

Then prepare the properties file:

[dmadmin@content_server_01 dctm_install]$ vi CS_Installation.properties
[dmadmin@content_server_01 dctm_install]$ cat CS_Installation.properties
### Silent installation response file for CS binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
APPSERVER.SERVER_HTTP_PORT=9080
APPSERVER.SECURE.PASSWORD=adm1nP4ssw0rdJMS

### Common parameters
COMMON.DO_NOT_RUN_DM_ROOT_TASK=true

[dmadmin@content_server_01 dctm_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • APPSERVER.SERVER_HTTP_PORT: The port to be used by the JMS that will be installed
  • APPSERVER.SECURE.PASSWORD: The password of the “admin” account of the JMS. Yes, you need to put all passwords in clear text in the silent installation properties files so add it just before starting the installation and remove them right after
  • COMMON.DO_NOT_RUN_DM_ROOT_TASK: Whether or not you want to run the dm_root_task in the silent installation. I usually set it to true, so it is NOT executed because the Installation Owner I'm using does not have root access for security reasons
  • On Windows, you would need to provide the Installation Owner's password as well and the path you want to install Documentum on ($DOCUMENTUM). On Linux, the first one isn't needed and the second one needs to be in the environment before starting.
  • You could also potentially add more properties in this file: SERVER.LOCKBOX_FILE_NAMEx and SERVER.LOCKBOX_PASSPHRASE.PASSWORDx (where x is a number starting with 1 and incrementing in case you have several lockboxes). These parameters would be used for existing lockbox files that you would want to load. Honestly, these parameters are useless. You will anyway need to provide the lockbox information during the docbase/repository creation and you will need to specify if you want a new lockbox, an existing lockbox or no lockbox at all so specifying it here is kind of useless…

 

Once the properties file is ready, you can install the Documentum binaries and the JMS in silent using the following command:

[dmadmin@content_server_01 dctm_install]$ ./serverSetup.bin -f CS_Installation.properties
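
Since the silent installer gives almost no feedback, it is worth doing a quick check once it returns. A minimal sketch, assuming the usual log location and the JMS port configured above:

# check the installer log for errors (typical location, adjust if needed)
grep -i "error" $DM_HOME/install/logs/install.log

# verify that the Java Method Server is up and answering
ps -ef | grep MethodServer | grep -v grep
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9080/DmMethods/servlet/DoMethod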

 

This concludes the first blog of this series about Documentum silent installations. Stay tuned for more soon.

 


Documentum – Silent Install – Docbroker & Licences


In a previous blog, I quickly went through the different things to know about the silent installations as well as how to install the CS binaries. Once the CS binaries are installed, you can install/configure a few more components. In this second blog, I will continue with:

  • Documentum docbroker/connection broker installation
  • Configuration of a Documentum licence

 

1. Documentum docbroker/connection broker installation

As mentioned in the previous blog, the examples provided by Documentum contain almost all possible parameters, but for this section only a few of them are required. The properties file for a docbroker/connection broker installation is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbroker.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbroker.properties
### Silent installation response file for a Docbroker
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=false
SERVER.CONFIGURATOR.BROKER=true

### Docbroker parameters
SERVER.DOCBROKER_ACTION=CREATE
SERVER.DOCBROKER_PORT=1489
SERVER.DOCBROKER_NAME=Docbroker
SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.DOCBROKER_CONNECT_MODE=dual

### Common parameters
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbroker/connection broker, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. Same here, it will be false
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. That’s the purpose of this properties file so it will be true
  • SERVER.DOCBROKER_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • SERVER.DOCBROKER_PORT: The port the docbroker/connection broker will listen to (always the native port)
  • SERVER.DOCBROKER_NAME: The name of the docbroker/connection broker to create/upgrade/delete
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to use for the dfc.properties projection for this docbroker/connection broker
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to use for the dfc.properties projection related to this docbroker/connection broker. It should obviously be the same as “SERVER.DOCBROKER_PORT”, don’t ask me why there are two different parameters for that…
  • SERVER.DOCBROKER_CONNECT_MODE: The connection mode to use for the docbroker/connection broker, it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured (and you need to have the keystore and truststore already available):
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbroker/connection broker
      • SERVER.DOCBROKER_KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.DOCBROKER_CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • START_METHOD_SERVER: Whether or not you want the JMS to be re-started again once the docbroker/connection broker has been created. Since we usually create the docbroker/connection broker just before creating the docbases/repositories and since the docbases/repositories will anyway stop the JMS, we can leave it stopped there
  • MORE_DOCBASE: Never change this value, it should remain as false as far as I know
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always leave it as false… I know that the name of this parameter is strange but that's the name that is coming from the templates… But if you look a little bit on the internet, you might be able to find "SERVER.CONTINUE.MORE.COMPONENT" as well… So which one is "correct", which one isn't is still a mystery for me. I'm using the first one but since I always set it to false, that doesn't have any impact for me and I never saw any errors coming from the log files or anything.

 

Once the properties file is ready, you can install the docbroker/connection broker using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbroker.properties

 

That’s it, after a few seconds, the prompt will be returned and the docbroker/connection broker will be installed with the provided parameters.
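
To quickly verify that the docbroker/connection broker is really up and answering on its port, you can ping it with the dmqdocbroker utility shipped with the Content Server:

[dmadmin@content_server_01 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c ping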

 

2. Configuration of a Documentum licence

Once you have a docbroker/connection broker installed, you can configure/enable a number of licences (actually you could have done it before). For this example, I will only enable the TCS but you can do it for all others too. The properties file for a licence configuration is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Licence.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Licence.properties
### Silent installation response file for a Licence
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=true
SERVER.CONFIGURATOR.REPOSITORY=false
SERVER.CONFIGURATOR.BROKER=false

### Licensing parameters
SERVER.TCS_LICENSE=DummyLicenceForTCS

### Common parameters
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$

 

A short description of these properties – compared to the above ones:

  • SERVER.CONFIGURATOR.LICENSING & SERVER.CONFIGURATOR.BROKER: This time, we will obviously set the broker to false and the licensing to true so we do not re-install another docbroker/connection broker
  • Licences:
    • SERVER.TCS_LICENSE: Licence string to enable the Trusted Content Services on this CS
    • SERVER.XHIVE_LICENSE: Licence string to enable the XML Store Feature
    • SERVER.AS_LICENSE: Licence string to enable the Archive Service
    • SERVER.CSSL_LICENSE: Licence string to enable the Content Storage Service Licence
    • aso… Some of these licences require more parameters to be added (XHIVE: “XHIVE.PAGE.SIZE”, “SERVER.ENABLE_XHIVE”, “SERVER.XHIVE_HOST”, aso…)

 

Once the properties file is ready, you can configure the licence(s) using the following command (same as previously, only the file changed):

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Licence.properties

 

It might make sense to enable some licences during the installation of a specific docbase/repository, so that would be up to you to decide. In the above example, I only enabled the TCS so it is available to all docbases/repositories that will be installed on this Content Server. Therefore, it makes sense to do it separately, before the installation of the docbases/repositories.

You now know how to install and configure a docbroker/connection broker as well as how to enable licences using the silent installation provided by Documentum.

 


Documentum – Silent Install – Docbases/Repositories


In previous blogs, we installed the Documentum binaries as well as a docbroker (+ licence(s) if needed) in silent mode. In this one, we will see how to install docbases/repositories, and by that I mean either a Global Registry (GR) repository or a normal repository.

As you all know, you will need one of your repositories to be a GR and I would always recommend setting up a GR that isn't used by the end-users (no real documents). That's why I will split this blog into two parts: the installation of a GR and then the installation of a normal repository that will be used by end-users. So, let's get to it.

 

1. Documentum Global Registry repository installation

The properties file for a GR installation is as follows (it's a big one):

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_GR.properties
### Silent installation response file for a Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.create.new
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.DOCBASE_ID=1010101
SERVER.DOCBASE_DESCRIPTION=Global Registry repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_GR_DOCBASE_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=gr_docbase
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=gr_docbase
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=gr_d0cb4seP4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=gr_docbase

SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION=USE_THIS_REPOSITORY
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=false
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_GR.properties
[dmadmin@content_server_01 ~]$

 

In the above commands, I didn’t put the SERVER.DOCUMENTUM_DATA and SERVER.DOCUMENTUM_SHARE into the file directly but I used sed commands to update the file later because I didn’t want to direct you to use a certain path for your installation like /app or /opt or /var or whatever… This choice is yours, so I just used sub-folders of $DOCUMENTUM and used this environment variable to set both parameters so you can choose which path you want for the Data and Share folder (the above is the default but you can set what you want).

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.CONFIGURATOR.LICENSING: Whether or not you want to configure a licence using this properties file. Here since we just want a docbase/repository, it is obviously false
  • SERVER.CONFIGURATOR.REPOSITORY: Whether or not you want to configure a docbase/repository. That’s the purpose of this properties file so it will be true
  • SERVER.CONFIGURATOR.BROKER: Whether or not you want to configure a docbroker/connection broker. Same as the licence, it will be false
  • SERVER.DOCBASE_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.use.existing.aek.lockbox: Whether to use an existing aek or create a new one. Possible values are “common.create.new” or “common.use.existing”. In this case, it is the first docbase/repository created so we are creating a new one. In case of migration/upgrade, you might want to use an existing one (after upgrading it) …
  • common.aek.passphrase.password: The password to be used for the AEK
  • common.aek.key.name: The name of the AEK key to be used. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm to be used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you want to use a Lockbox to protect the AEK key. If set to true, a lockbox will be created and the AEK key will be stored in it
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox to be used. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password to be used for the Lockbox
  • SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS: Whether or not the “SERVER.DOCUMENTUM_DATA” and “SERVER.DOCUMENTUM_SHARE” are using a SAN or NAS path
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository to be created (dm_docbase_config.object_name)
  • SERVER.DOCBASE_ID: The ID of the docbase/repository to be created
  • SERVER.DOCBASE_DESCRIPTION: The description of the docbase/repository to be created (dm_docbase_config.title)
  • SERVER.PROJECTED_DOCBROKER_HOST: The hostname to be use for the [DOCBROKER_PROJECTION_TARGET] on the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.PROJECTED_DOCBROKER_PORT: The port to be use for the [DOCBROKER_PROJECTION_TARGET] on the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.TEST_DOCBROKER: Whether or not you want to test the docbroker/connection broker connection during the installation. I would recommend always setting this to true to be sure the docbase/repository is installed properly… If a docbroker/connection broker isn't available, the installation will not be complete (DARs installation for example) but you will not see any error, unless you manually check the installation log…
  • SERVER.CONNECT_MODE: The connection mode of the docbase/repository to be used (dm_server_config.secure_connect_mode), it can be either native, dual or secure. If it is dual or secure, you have 2 choices:
    • Use the default “Anonymous” mode, which is actually not really secure
    • Use a real “SSL Certificate” mode, which requires some more parameters to be configured:
      • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository
      • SERVER.KEYSTORE_FILE_NAME: The name of the p12 file that contains the keystore
      • SERVER.KEYSTORE_PASSWORD_FILE_NAME: The name of the password file that contains the password of the keystore
      • SERVER.TRUST_STORE_FILE_NAME: The name of the p7b file that contains the SSL Certificate needed to trust the targets (from a docbase point of view)
      • SERVER.CIPHER_LIST: Colon separated list of ciphers to be enabled (E.g.: EDH-RSA-AES256-GCM-SHA384:EDH-RSA-AES256-SHA)
      • SERVER.DFC_SSL_TRUSTSTORE: Full path and name of the truststore to be used that contains the SSL Certificate needed to trust the targets (from a DFC/client point of view)
      • SERVER.DFC_SSL_TRUSTSTORE_PASSWORD: The password of the truststore in clear text
      • SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE: Whether or not to use the Java truststore or the 2 above parameters instead
  • SERVER.USE_EXISTING_DATABASE_ACCOUNT: Whether or not you want to use an existing DB Account or create a new one. I don't like when an installer requests full access to a DB, so I usually prepare the DB User upfront with only the bare minimum set of permissions required and then use this account for the Application (Documentum docbase/repository in this case)
  • SERVER.INDEXSPACE_NAME: The name of the tablespace to be used to store the indexes (to be set if using existing DB User)
  • SERVER.DATABASE_CONNECTION: The name of the Database to connect to. This needs to be available on the tnsnames.ora if using Oracle, aso…
  • SERVER.DATABASE_ADMIN_NAME: The name of the Database admin account to be used. There is no reason to put anything else than the same as the schema owner’s account here… If you configured the correct permissions, you don’t need a DB admin account at all
  • SERVER.SECURE.DATABASE_ADMIN_PASSWORD: The password of the above-mentioned account
  • SERVER.DOCBASE_OWNER_NAME: The name of the schema owner’s account to be used for runtime
  • SERVER.SECURE.DOCBASE_OWNER_PASSWORD: The password of the schema owner’s account
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used. To be set only when using Oracle…
  • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION: If this docbase/repository should be a Global Registry, then set this to “USE_THIS_REPOSITORY”, otherwise do not set the parameter. If the GR is on a remote host, you need to set this to “SPECIFY_DIFFERENT_REPOSITORY” and then use a few additional parameters to specify the name of the GR repo and the host it is currently running on
  • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account to be created. This is usually something like “dm_bof_registry”
  • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=The password to be used for the BOF Registry account
  • SERVER.ENABLE_XHIVE: Whether or not you want to enable the XML Store Feature. As I mentioned in the blog with the licences, this is one of the thing you might want to enable the licence during the docbase/repository configuration. If you want to enable the XHIVE, you will need to specify a few additional parameters like the XDB user/password, host and port, aso…
  • SERVER.CONFIGURATOR.DISTRIBUTED_ENV: Whether or not you want to enable/configure the DMS. If you set this to true, you will need to add a few more parameters like the DMS Action to be performed, the webserver port, host, password, aso…
  • SERVER.ENABLE_RKM: Whether or not you want to enable/configure the RKM. If you set this to true, you will need to add a few more parameters like the host/port on which the keys will be stored, the certificates and password, aso…
  • START_METHOD_SERVER: Whether or not you want the JMS to be re-started again once the docbase/repository has been created. Since we usually create at least 2 docbases/repositories, we can leave it stopped there
  • MORE_DOCBASE: Never change this value, it should remain as false as far as I know
  • SERVER.CONGINUE.MORECOMPONENT: Whether or not you want to configure some additional components. Same as above, I would always leave it as false… I know that the name of this parameter is strange but that's the name that is coming from the templates… But if you look a little bit on the internet, you might be able to find "SERVER.CONTINUE.MORE.COMPONENT" instead… So which one is working, which one isn't is still a mystery for me. I use the first one but since I always set it to false, that doesn't have any impact for me and I never saw any errors coming from the log files.

 

Once the properties file is ready, you can install the Global Registry repository using the following command:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_GR.properties

 

Contrary to previous installations, this will take some time (around 20 minutes) because it needs to install the docbase/repository, then there are DARs that need to be installed, aso… Unfortunately, there is no feedback on the progress, so you just need to wait and in case something goes wrong, you won’t even notice since there are no errors shown… Therefore, check the logs to be sure!
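
A quick way to perform that check once the prompt is back; a minimal sketch assuming the default log locations (replace the password placeholder, of course):

# installer/configuration log (typical location, adjust if needed)
grep -iE "error|fail" $DM_HOME/install/logs/install.log

# repository log and a simple connection test against the new GR
tail -20 $DOCUMENTUM/dba/log/gr_docbase.log
echo "quit" | idql gr_docbase -Udmadmin -P<password>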

 

2. Other repository installation

Once you have a Global Registry repository installed, you can install the repository that will be used by the end-users (which isn't a GR then). The properties file for an additional repository is as follows:

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/CS_Docbase_Other.properties
### Silent installation response file for a Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.CONFIGURATOR.LICENSING=false
SERVER.CONFIGURATOR.REPOSITORY=true
SERVER.CONFIGURATOR.BROKER=false

### Docbase parameters
SERVER.DOCBASE_ACTION=CREATE

common.use.existing.aek.lockbox=common.use.existing
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA_FOR_SAN_NAS=false
SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_01.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.DOCBASE_ID=1010102
SERVER.DOCBASE_DESCRIPTION=Docbase1 repository for silent install blog

SERVER.PROJECTED_DOCBROKER_HOST=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT=1489
SERVER.TEST_DOCBROKER=true
SERVER.CONNECT_MODE=dual

SERVER.USE_EXISTING_DATABASE_ACCOUNT=true
SERVER.INDEXSPACE_NAME=DM_DOCBASE1_INDEX
SERVER.DATABASE_CONNECTION=DEMODBNAME
SERVER.DATABASE_ADMIN_NAME=docbase1
SERVER.SECURE.DATABASE_ADMIN_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_OWNER_NAME=docbase1
SERVER.SECURE.DOCBASE_OWNER_PASSWORD=d0cb4se1P4ssw0rdDB
SERVER.DOCBASE_SERVICE_NAME=docbase1

### Common parameters
SERVER.ENABLE_XHIVE=false
SERVER.CONFIGURATOR.DISTRIBUTED_ENV=false
SERVER.ENABLE_RKM=false
START_METHOD_SERVER=true
MORE_DOCBASE=false
SERVER.CONGINUE.MORECOMPONENT=false

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CS_Docbase_Other.properties
[dmadmin@content_server_01 ~]$

 

I won’t list all these parameters again but just the ones that changed, except the docbase/repository name/id/description and DB accounts/tablespaces since these are pretty obvious:

  • Updated parameter’s value:
    • common.use.existing.aek.lockbox: As mentioned above, since the AEK key is now created (as part of the GR installation), this now needs to be set to "common.use.existing" instead
  • Removed parameters (all of these will be taken from the dfc.properties that has been updated as part of the GR installation):
    • SERVER.GLOBAL_REGISTRY_SPECIFY_OPTION
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD

 

Once the properties file is ready, you can install the additional repository in the same way:

[dmadmin@content_server_01 ~]$ $DM_HOME/install/dm_launch_server_config_program.sh -f /tmp/dctm_install/CS_Docbase_Other.properties

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Primary” Content Server using the silent installation provided by Documentum. In a later blog, I will talk about specificities related to a “Remote” Content Server for a High Availability environment.

 


Documentum – Silent Install – D2


In previous blogs, we installed the Documentum binaries, a docbroker (+ licence(s) if needed) as well as several repositories in silent mode. In this one, we will see how to install D2 on a predefined list of docbases/repositories (on the Content Server side) and you will see that, here, the process is quite different.

D2 has supported silent installation for quite some time now and it is pretty easy to do. At the end of the D2 GUI installer, there is a screen where you are asked if you want to generate a silent properties (response) file containing the information that has been set in the D2 GUI installer. Therefore, this is a first way to start working with the silent installation, or you can just read this blog ;).

So, let’s start this with the preparation of a template file. I will use a lot of placeholders in the template and will replace the values with sed commands, just as a quick look at how you can script a silent installation with a template configuration file and some properties prepared before.

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/D2_template.xml
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/D2_template.xml
<?xml version="1.0" encoding="UTF-8"?>
<AutomatedInstallation langpack="eng">
  <com.izforge.izpack.panels.HTMLHelloPanel id="welcome"/>
  <com.izforge.izpack.panels.UserInputPanel id="SelectInstallOrMergeConfig">
    <userInput>
      <entry key="InstallD2" value="true"/>
      <entry key="MergeConfigs" value="false"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.HTMLInfoPanel id="readme"/>
  <com.izforge.izpack.panels.PacksPanel id="UNKNOWN (com.izforge.izpack.panels.PacksPanel)">
    <pack index="0" name="Installer files" selected="true"/>
    <pack index="1" name="D2" selected="###WAR_REQUIRED###"/>
    <pack index="2" name="D2-Config" selected="###WAR_REQUIRED###"/>
    <pack index="3" name="D2-API for Content Server/JMS" selected="true"/>
    <pack index="4" name="D2-API for BPM" selected="###BPM_REQUIRED###"/>
    <pack index="5" name="DAR" selected="###DAR_REQUIRED###"/>
  </com.izforge.izpack.panels.PacksPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.0">
    <userInput>
      <entry key="jboss5XCompliant" value="false"/>
      <entry key="webappsDir" value="###DOCUMENTUM###/D2-Install/war"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.2">
    <userInput>
      <entry key="pluginInstaller" value="###PLUGIN_LIST###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.3">
    <userInput>
      <entry key="csDir" value="###DOCUMENTUM###/D2-Install/D2-API"/>
      <entry key="bpmDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/bpm.ear"/>
      <entry key="jmsDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/ServerApps.ear"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.4">
    <userInput>
      <entry key="installationDir" value="###DOCUMENTUM###/D2-Install/DAR"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.5">
    <userInput>
      <entry key="dfsDir" value="/tmp/###DFS_SDK_PACKAGE###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.7">
    <userInput>
      <entry key="COMMON.USER_ACCOUNT" value="###INSTALL_OWNER###"/>
      <entry key="install.owner.password" value="###INSTALL_OWNER_PASSWD###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.8">
    <userInput>
      <entry key="SERVER.REPOSITORIES.NAMES" value="###DOCBASE_LIST###"/>
      <entry key="setReturnRepeatingValue" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.9">
    <userInput>
      <entry key="securityRadioSelection" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPD2ConfigOrClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseUsetheSameDFC">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableDFCSessionPool">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDFCKeyStoreInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetD2ConfigLanguage">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableD2BOCS">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetHideDomainforConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTemporaryMaxFiles">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="10">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="11">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfoForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="12">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="13">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="14">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="15">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="16">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="17">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="18">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="19">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="20">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="21">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="22">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTransferMode">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="24">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="25">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableAuditing">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPchooseWebAppServer">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskWebappsDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskNewWarDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.InstallPanel id="UNKNOWN (com.izforge.izpack.panels.InstallPanel)"/>
  <com.izforge.izpack.panels.XInfoPanel id="UNKNOWN (com.izforge.izpack.panels.XInfoPanel)"/>
  <com.izforge.izpack.panels.FinishPanel id="UNKNOWN (com.izforge.izpack.panels.FinishPanel)"/>
</AutomatedInstallation>

[dmadmin@content_server_01 ~]$

 

As you probably understood by looking at the above file, I’m using “/tmp/” for the input elements needed by D2 (the DFS package, the D2 installer and the D2+Pack Plugins) and “$DOCUMENTUM/D2-Install” as the output folder where D2 generates its artifacts.
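
If you want to prepare the same layout, here is a minimal sketch of the folders this template assumes (the paths are only the example values used in this blog, adapt them to your own setup):

[dmadmin@content_server_01 ~]$ # Input packages expected under /tmp (template, D2 installer, D2+Pack plugins)
[dmadmin@content_server_01 ~]$ ls /tmp/dctm_install/D2_template.xml /tmp/D2_4.7.0_P18/D2-Installer-4.7.0.jar /tmp/D2_pluspack_4.7.0.P18/Plugins/
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ # Output folders that MUST exist before running the installer in silent
[dmadmin@content_server_01 ~]$ mkdir -p $DOCUMENTUM/D2-Install/war $DOCUMENTUM/D2-Install/D2-API $DOCUMENTUM/D2-Install/DAR $DOCUMENTUM/D2-Install/tmp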

Once you have the template ready, you can replace the placeholders as follows (this is just an example of a configuration based on the other silent blogs I wrote so far):

[dmadmin@content_server_01 ~]$ export d2_install_file=/tmp/dctm_install/D2.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cp /tmp/dctm_install/D2_template.xml ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###WAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###BPM_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###DAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DOCUMENTUM###,$DOCUMENTUM," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###PLUGIN_LIST###,/tmp/D2_pluspack_4.7.0.P18/Plugins/C2-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/D2-Bin-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/O2-Install-4.7.0.jar;," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###JMS_HOME###,$DOCUMENTUM_SHARED/wildfly9.0.1," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DFS_SDK_PACKAGE###,emc-dfs-sdk-7.3," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "  ----> Please enter the Install Owner's password: " dm_pw; echo; echo
  ----> Please enter the Install Owner's password: <TYPE HERE THE PASSWORD>
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER###,dmadmin," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER_PASSWD###,${dm_pw}," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s/###DOCBASE_LIST###/Docbase1/" ${d2_install_file}
[dmadmin@content_server_01 ~]$

 

A short description of these properties as well as some notes on the values used above:

  • langpack: The language you are usually using for running the installers… English is fine if you use this template
  • entry key=”InstallD2″: Whether or not you want to install D2
  • entry key=”MergeConfigs”: Whether or not you want to merge the actual configuration/installation with the new one. I’m always restarting a D2 installation from scratch (removing the D2 hidden files for that) so I always set this to false
  • pack index=”0″ name=”Installer files”: Always set this to true to install D2 on a CS
  • pack index=”1″ name=”D2″: Whether or not you want to generate the D2 WAR file. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • pack index=”2″ name=”D2-Config”: Same as above but for the D2-Config WAR file
  • pack index=”3″ name=”D2-API for Content Server/JMS”: Whether or not you want the D2 Installer to put the D2 specific libraries into the JMS lib folder (path defined in: entry key=”jmsDir”). Even if you set this to true, you will still need to manually put a lot of D2 libs into the JMS lib folder because D2 only puts a few of them while many more are required to run D2 properly (see the documentation for the full list)
  • pack index=”4″ name=”D2-API for BPM”: Same as above but for the BPM this time (path defined in: entry key=”bpmDir”)
  • pack index=”5″ name=”DAR”: Whether or not you want to generate the DARs. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • entry key=”jboss5XCompliant”: I guess this is for the JBoss 5 support so if you are on Dctm 7.x, leave this as false
  • entry key=”webappsDir”: The path the D2 Installer will put the generated WAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/war” so this folder MUST exist before running the installer in silent
  • entry key=”pluginInstaller”: This one is a little bit trickier… It’s a semicolon-separated list of all the D2+Pack Plugins you would like to install in addition to D2. In the above, I’m using the C2, D2-Bin as well as O2 plugins. The D2+Pack package must obviously be extracted BEFORE running the installer in silent and all the paths MUST exist (you will need to extract the plugin jar from each plugin zip file). I opened a few bugs & enhancement requests for these so if you are facing an issue, let me know, I might be able to help you
  • entry key=”csDir”: The path the D2 Installer will put the generated libraries into. In this example, I set it to “$DOCUMENTUM/D2-Install/D2-API” so this folder MUST exist before running the installer in silent
  • entry key=”bpmDir”: The path the D2 Installer will put a few of the D2 libraries into for the BPM (it’s not all needed JARs and this parameter is obviously not needed if you set ###BPM_REQUIRED### to false)
  • entry key=”jmsDir”: Same as above but for the JMS this time
  • entry key=”installationDir”: The path the D2 Installer will put the generated DAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/DAR” so this folder MUST exist before running the installer in silent
  • entry key=”dfsDir”: The path where the DFS SDK can be found. The DFS SDK package MUST be extracted in this folder before running the installer in silent
  • entry key=”COMMON.USER_ACCOUNT”: The name of the Documentum Installation Owner
  • entry key=”install.owner.password”: The password of the Documentum Installation Owner. I used above a “read -s” command so it doesn’t appear on the command line, but it will be put in clear text in the xml file…
  • entry key=”SERVER.REPOSITORIES.NAMES”: A comma-separated list of all docbases/repositories (without spaces) that need to be configured for D2. The DARs will be installed automatically on these docbases/repositories and, if you want to do it properly, it mustn’t contain the GR. You could potentially add the GR to this parameter but then all D2 DARs would be installed into the GR and this isn’t needed… Only the “D2-DAR.dar” and “Collaboration_Services.dar” need to be installed on the GR, so I only add normal docbases/repositories to this parameter and, once D2 is installed, I manually deploy these two DARs into the GR (I wrote a blog about deploying DARs easily to a docbase a few years ago if you are interested). So, here I have a value of “Docbase1″ but if you had two, you could set it to “Docbase1,Docbase2″
  • entry key=”setReturnRepeatingValue”: Whether or not you want the repeating values. A value of true should set the “return_top_results_row_based=false” in the server.ini
  • entry key=”securityRadioSelection”: A value of true means that D2 has to apply Security Rules to content BEFORE applying AutoLink while a value of false means that D2 can only do it AFTER
  • That’s the end of this file because I’m using D2 4.7 and in D2 4.7, there is no Lockbox anymore! If you are using previous D2 versions, you will need to put additional parameters for the D2 Lockbox generation, location, password, aso…
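
Before going further, it can be worth checking that no placeholder was left behind in the generated file; a simple sketch (reusing the ${d2_install_file} variable from above) would be:

[dmadmin@content_server_01 ~]$ # Should return nothing if all ###...### placeholders have been replaced
[dmadmin@content_server_01 ~]$ grep -E "###[A-Z_]+###" ${d2_install_file}
[dmadmin@content_server_01 ~]$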

 

Once the properties file is ready, you can install D2 using the following command:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -DTRACE=true -DDEBUG=true -Djava.io.tmpdir=$DOCUMENTUM/D2-Install/tmp -jar /tmp/D2_4.7.0_P18/D2-Installer-4.7.0.jar ${d2_install_file}

 

You now know how to install D2 on a Content Server using the silent installation provided by D2. As you saw above, it is quite different from the silent installation of the other Documentum components, but it works so… Maybe at some point in the future, D2 will switch to the same kind of properties file as Documentum.

 

The article Documentum – Silent Install – D2 appeared first on Blog dbi services.

Documentum – Silent Install – Remote Docbases/Repositories (HA)


In previous blogs, we installed in silent the Documentum binaries, a docbroker (+licence(s) if needed), several repositories and finally D2. In this one, we will see how to install remote docbases/repositories to have a High Availability environment with the docbases/repositories that we already installed in silent.

As mentioned in the first blog of this series, there is a utility under “$DM_HOME/install/silent/silenttool” that can be used to generate a skeleton for a CFS/Remote CS but there are still missing parameters so I will describe them in this blog.

In this blog, I will also configure the Global Repository (GR) in HA so that you have it available even if the first node fails… This is particularly important if, like me, you prefer to set the GR as the crypto repository (so it is the repository used for encryption/decryption).

 

1. Documentum Remote Global Registry repository installation

The properties file for a Remote GR installation is as follows (it assumes that you already have the binaries and a docbroker installed on this Remote CS):

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_GR.properties
### Silent installation response file for a Remote Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.PRIMARY_SERVER_CONFIG_NAME=gr_docbase
CFS_SERVER_CONFIG_NAME=content_server_02_gr_docbase
SERVER.DOCBASE_SERVICE_NAME=gr_docbase
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED=true
SERVER.PROJECTED_DOCBROKER_HOST_OTHER=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT_OTHER=1489
SERVER.GLOBAL_REGISTRY_REPOSITORY=gr_docbase
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$

 

Just like in the previous blog, I will let you set the DATA and SHARE folders as you want.
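
Since this is an HA setup, the DATA folder must point to a filesystem that all Content Servers hosting the docbase/repository can access (usually an NFS share). A quick sanity check before running the installer could look like this (just a sketch, assuming the $DOCUMENTUM/data value used above):

[dmadmin@content_server_02 ~]$ # The data area should be the same shared filesystem as the one seen by content_server_01
[dmadmin@content_server_02 ~]$ df -h $DOCUMENTUM/data
[dmadmin@content_server_02 ~]$ touch $DOCUMENTUM/data/.cfs_write_test && rm $DOCUMENTUM/data/.cfs_write_test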

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.COMPONENT_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.aek.passphrase.password: The password used for the AEK on the Primary CS
  • common.aek.key.name: The name of the AEK key used on the Primary CS. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you used a Lockbox to protect the AEK key on the Primary CS. If set to true, the lockbox will be downloaded from the Primary CS, that’s why you don’t need the “common.use.existing.aek.lockbox” property
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox used on the Primary CS. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password used for the Lockbox on the Primary CS
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository created on the Primary CS (dm_docbase_config.object_name)
  • SERVER.PRIMARY_SERVER_CONFIG_NAME: The name of the dm_server_config object created on the Primary CS
  • CFS_SERVER_CONFIG_NAME: The name of dm_server_config object to be created for this Remote CS
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used
  • SERVER.REPOSITORY_USERNAME: The name of the Installation Owner. I believe it can be any superuser account but I didn’t test it
  • SERVER.SECURE.REPOSITORY_PASSWORD: The password of the above account
  • SERVER.REPOSITORY_USER_DOMAIN: The domain of the above account. If using an inline user like the Installation Owner, you should keep it empty
  • SERVER.REPOSITORY_USERNAME_WITH_DOMAIN: Same value as the REPOSITORY_USERNAME if the USER_DOMAIN is kept empty
  • SERVER.REPOSITORY_HOSTNAME: The Fully Qualified Domain Name of the Primary CS
  • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository (it goes with the SERVER.CONNECT_MODE). If you set this to true, you will have to add the usual additional parameters, just like for the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_HOST: The Fully Qualified Domain Name of the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_PORT: The port used by the docbroker/connection broker on the Primary CS
  • SERVER.PROJECTED_CONNECTION_BROKER_HOST: The hostname to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.PROJECTED_CONNECTION_BROKER_PORT: The port to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to, by default
  • SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED: Whether or not you want to validate the GR on the Primary CS. I always set this to true for the first docbase/repository installed on the Remote CS (in other words: for the GR installation). If you set this to true, you will have to provide some additional parameters:
    • SERVER.PROJECTED_DOCBROKER_HOST_OTHER: The Fully Qualified Domain Name of the docbroker/connection broker that the GR on the Primary CS projects to so this is usually the Primary CS…
    • SERVER.PROJECTED_DOCBROKER_PORT_OTHER: The port of the docbroker/connection broker that the GR on the Primary CS projects to
    • SERVER.GLOBAL_REGISTRY_REPOSITORY: The name of the GR repository
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account created on the Primary CS inside the GR repository. This is usually something like “dm_bof_registry”
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password used by the BOF Registry account

 

Once the properties file is ready, first make sure the gr_docbase is running on the “Primary” CS (content_server_01) and then start the CFS installer using the following commands:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap gr_docbase
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase gr_docbase has 1 server:
--------------------------------------------
  server name         :  gr_docbase
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  12345
  last ckpt time      :  6/12/2018 14:23:35
  next ckpt time      :  6/12/2018 14:28:35
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010101
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_GR.properties

 

Don’t forget to check the logs once done to make sure it went without issue!
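
In addition to the logs, a quick way to verify that the second dm_server_config object has been created is to connect to the repository through the new Remote CS and list the server config objects; a minimal sketch (idql will prompt for the dmadmin password):

[dmadmin@content_server_02 ~]$ idql gr_docbase.content_server_02_gr_docbase -Udmadmin
1> select object_name, r_host_name from dm_server_config;
2> go

You should normally see both the original gr_docbase server config on content_server_01 and the new content_server_02_gr_docbase one.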

 

2. Other Remote repository installation

Once you have the Remote Global Registry repository installed, you can install the Remote repository that will be used by the end-users (which isn’t a GR then). The properties file for an additional remote repository is as follows:

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_Other.properties
### Silent installation response file for a Remote Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.PRIMARY_SERVER_CONFIG_NAME=Docbase1
CFS_SERVER_CONFIG_NAME=content_server_02_Docbase1
SERVER.DOCBASE_SERVICE_NAME=Docbase1
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$

 

I won’t list all these parameters again because, as you can see above, it is exactly the same except for the docbase/repository name. Only the last section regarding the GR validation isn’t needed anymore. Once the properties file is ready, you can install the additional remote repository in the same way:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap Docbase1
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase Docbase1 has 1 server:
--------------------------------------------
  server name         :  Docbase1
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  23456
  last ckpt time      :  6/12/2018 14:46:42
  next ckpt time      :  6/12/2018 14:51:42
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010102
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_Other.properties

 

At this point, you will have the dm_server_config object of the second docbase/repository created but that’s pretty much all you get… For a correct/working HA solution, you will need to configure the jobs for HA support (is_restartable, method_verb, …), maybe change the checkpoint_interval, configure the projections, trust the needed DFC clients (JMS applications), aso…
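
If you want to quickly see which jobs still need to be adapted, a simple DQL listing the jobs with their current target_server can help spot the ones still bound to a single server (just a sketch, idql will prompt for the password):

[dmadmin@content_server_02 ~]$ idql Docbase1.content_server_02_Docbase1 -Udmadmin
1> select object_name, target_server, is_inactive from dm_job order by object_name;
2> go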

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Remote” Content Server (CFS) using the silent installation provided by Documentum.

 

The article Documentum – Silent Install – Remote Docbases/Repositories (HA) appeared first on Blog dbi services.


Documentum – Silent Install – xPlore binaries & Dsearch


In previous blogs, we installed in silent the Documentum binaries (CS), a docbroker (+licence(s) if needed), several repositories (here and here) and finally D2. I believe I only have 2 blogs left and they are both related to xPlore. In this one, we will see how to install the xPlore binaries as well as configure a first instance (Dsearch here) on it.

Just like other Documentum components, you can find some silent installation files or at least a template for the xPlore part. On the Full Text side, it is actually easier to find these silent files because they are included directly into the tar installation package so you will be able to find the following files as soon as you extract the package (xPlore 1.6):

  • installXplore.properties: Contains the template to install the FT binaries
  • configXplore.properties: Contains the template to install a FT Dsearch (primary, secondary) or a CPS only
  • configIA.properties: Contains the template to install a FT IndexAgent

 

In addition to that, and contrary to most of the Documentum components, you can actually find documentation about most of the xPlore silent parameters, so if you have questions, you can check the documentation.

 

1. Documentum xPlore binaries installation

The properties file for the xPlore binaries installation is really simple:

[xplore@full_text_server_01 ~]$ cd /tmp/xplore_install/
[xplore@full_text_server_01 xplore_install]$ tar -xvf xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ chmod 750 setup.bin
[xplore@full_text_server_01 xplore_install]$ rm xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ ls *.properties
configIA.properties  configXplore.properties  installXplore.properties
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ vi FT_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Installation.properties
### Silent installation response file for FT binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
SMTP_HOST=localhost
ADMINISTRATOR_EMAIL_ADDRESS=xplore@full_text_server_01.dbi-services.com

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you want to install xPlore on. This will be the base folder under which the binaries will be installed. I put here /opt/xPlore but you can use whatever you want
  • SMTP_HOST: The host to target for the SMTP (emails)
  • ADMINISTRATOR_EMAIL_ADDRESS: The email address to be used for the watchdog. If you do not specify the SMTP_HOST and ADMINISTRATOR_EMAIL_ADDRESS properties, the watchdog configuration will end up with a non-fatal error, meaning that the binaries installation will still work without issue but you will have to add these manually for the watchdog if you want to use it. If you don’t want to use it, you can go ahead without: the Dsearch and IndexAgents will work properly without it but obviously you are losing the features that the watchdog brings

 

Once the properties file is ready, you can install the Documentum xPlore binaries in silent using the following command:

[xplore@full_text_server_01 xplore_install]$ ./setup.bin -f FT_Installation.properties

 

2. Documentum xPlore Dsearch installation

I will use the word “Dsearch” a lot below but this section can actually be used to install any instance type: Primary Dsearch, Secondary Dsearch or even CPS only. Once you have the binaries installed, you can install a first Dsearch (usually named PrimaryDsearch or PrimaryEss) that will be used for the Full Text indexing. The properties file for this component is as follows:

[xplore@full_text_server_01 xplore_install]$ vi FT_Dsearch_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Dsearch_Installation.properties
### Silent installation response file for Dsearch
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
ess.configMode.primary=1
ess.configMode.secondary=0
ess.configMode.upgrade=0
ess.configMode.delete=0
ess.configMode.cpsonly=0

### Other configurations
ess.primary=true
ess.sparenode=0

ess.data_dir=/opt/xPlore/data
ess.config_dir=/opt/xPlore/config

ess.primary_host=full_text_server_01.dbi-services.com
ess.primary_port=9300
ess.xdb-primary-listener-host=full_text_server_01.dbi-services.com
ess.xdb-primary-listener-port=9330
ess.transaction_log_dir=/opt/xPlore/config/wal/primary

ess.name=PrimaryDsearch
ess.FQDN=full_text_server_01.dbi-services.com

ess.instance.password=ds34rchAdm1nP4ssw0rd
ess.instance.port=9300

ess.ess.active=true
ess.cps.active=false
ess.essAdmin.active=true

ess.xdb-listener-port=9330
ess.admin-rmi-port=9331
ess.cps-daemon-port=9321
ess.cps-daemon-local-port=9322

common.installOwner.password=ds34rchAdm1nP4ssw0rd
admin.username=admin
admin.password=ds34rchAdm1nP4ssw0rd

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” on Linux but on Windows, you need to replace the double backslashes with forward slashes
  • common.64bits: Whether or not the system supports a 64 bits architecture
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • ess.configMode.primary: Whether or not you want to install a Primary Dsearch (binary value)
  • ess.configMode.secondary: Whether or not you want to install a Secondary Dsearch (binary value)
  • ess.configMode.upgrade: Whether or not you want to upgrade an instance (binary value)
  • ess.configMode.delete: Whether or not you want to delete an instance (binary value)
  • ess.configMode.cpsonly: Whether or not you want to install a CPS only and not a Primary/Secondary Dsearch (binary value)
  • ess.primary: Whether or not this instance is a primary instance (set this to true if installing a primary instance)
  • ess.sparenode: Whether or not the secondary instance is to be used as a spare node. This should be set to 1 only if “ess.configMode.secondary=1″ and you want it to be a spare node only
  • ess.data_dir: The path to be used to contain the instance data. For a single-node, this is usually /opt/xPlore/data and for a multi-node, it needs to be a shared folder between the different nodes of the multi-node
  • ess.config_dir: Same as “ess.data_dir” but for the config folder
  • ess.primary_host: The Fully Qualified Domain Name of the primary Dsearch this new instance will be linked to. Here we are installing a Primary Dsearch so it is the local host
  • ess.primary_port: The port that the primary Dsearch is/will be using
  • ess.xdb-primary-listener-host: The Fully Qualified Domain Name of the host where the xDB has been installed on for the primary Dsearch. This is usually the same value as “ess.primary_host”
  • ess.xdb-primary-listener-port: The port that the xDB is/will be using for the primary Dsearch. This is usually the value of “ess.primary_port” + 30
  • ess.transaction_log_dir: The path to be used to store the xDB transaction logs. This is usually under the “ess.config_dir” folder (E.g.: /opt/xPlore/config/wal/primary)
  • ess.name: The name of the instance to be installed. For a primary Dsearch, it is usually something like PrimaryDsearch
  • ess.FQDN: The Fully Qualified Domain Name of the current host the instance is being installed on
  • ess.instance.password: The password to be used for the new instance (xDB Administrator & superuser). Using the GUI installer, you can only set 1 password and it will be used for everything (JBoss admin, xDB Administrator, xDB superuser). In silent, you can separate them a little bit, if you want to
  • ess.instance.port: The port of the instance to be installed. For a primary Dsearch, it is usually 9300
  • ess.ess.active: Whether or not you want to enable/deploy the Dsearch (set this to true if installing a primary or secondary instance)
  • ess.cps.active: Whether or not you want to enable/deploy the CPS (already included in the Dsearch so set this to true only if installing a CPS Only)
  • ess.essAdmin.active: Whether or not you want to enable/deploy the Dsearch Admin
  • ess.xdb-listener-port: The port to be used by the xDB for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 30
  • ess.admin-rmi-port: The port to be used by the RMI for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 31
  • ess.cps-daemon-port: I’m not sure what this is used for because the correct port for the CPS daemon0 (on a primary Dsearch) is the next parameter but I know that the default value for this is usually “ess.instance.port” + 21. It is possible that this parameter is only used in case the new instance is a CPS Only because this port (instance port + 21) is used on a CPS Only host as Daemon0 so it would make sense… To be confirmed!
  • ess.cps-daemon-local-port: The port to be used by the CPS daemon0 for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 22. You need a few ports available after this one in case you are going to have several CPS daemons (9322, 9323, 9324, …)
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on linux, I always set a dummy password and there is no issue
  • admin.username: The name of the JBoss instance admin account to be created
  • admin.password: The password of the above-mentioned account

 

Once the properties file is ready, you can install the Documentum xPlore instance in silent using the following command:

[xplore@full_text_server_01 xplore_install]$ /opt/xPlore/setup/dsearch/dsearchConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f FT_Dsearch_Installation.properties
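
Here as well, check the installation logs once done. To quickly confirm that the PrimaryDsearch is up, you can hit the Dsearch admin URL on the instance port configured above (just a sketch, adapt the protocol/port if you changed them):

[xplore@full_text_server_01 xplore_install]$ curl -sk -I http://full_text_server_01.dbi-services.com:9300/dsearchadmin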

 

You now know how to install the Full Text binaries and a first instance on top of it using the silent installation provided by Documentum.

 

The article Documentum – Silent Install – xPlore binaries & Dsearch appeared first on Blog dbi services.

Documentum – Silent Install – xPlore IndexAgent


In previous blogs, we installed in silent the Documentum binaries (CS), a docbroker (+licence(s) if needed), several repositories (here and here), D2 and finally the xPlore binaries & Dsearch. This blog will be the last one of this series related to silent installation on Documentum and it will be about how to install an xPlore IndexAgent on the existing docbase/repository created previously.

So let’s start, as always, with the preparation of the properties file:

[xplore@full_text_server_01 ~]$ vi /tmp/xplore_install/FT_IA_Installation.properties
[xplore@full_text_server_01 ~]$ cat /tmp/xplore_install/FT_IA_Installation.properties
### Silent installation response file for Indexagent
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
indexagent.configMode.create=1
indexagent.configMode.upgrade=0
indexagent.configMode.delete=0
indexagent.configMode.create.migration=0

### Other configurations
indexagent.ess.host=full_text_server_01.dbi-services.com
indexagent.ess.port=9300

indexagent.name=Indexagent_Docbase1
indexagent.FQDN=full_text_server_01.dbi-services.com
indexagent.instance.port=9200
indexagent.instance.password=ind3x4g3ntAdm1nP4ssw0rd

indexagent.docbase.name=Docbase1
indexagent.docbase.user=dmadmin
indexagent.docbase.password=dm4dm1nP4ssw0rd

indexagent.connectionBroker.host=content_server_01.dbi-services.com
indexagent.connectionBroker.port=1489

indexagent.globalRegistryRepository.name=gr_docbase
indexagent.globalRegistryRepository.user=dm_bof_registry
indexagent.globalRegistryRepository.password=dm_b0f_reg1s7ryP4ssw0rd

indexagent.storage.name=default
indexagent.local_content_area=/opt/xPlore/wildfly9.0.1/server/DctmServer_Indexagent_Docbase1/data/Indexagent_Docbase1/export

common.installOwner.password=ind3x4g3ntAdm1nP4ssw0rd

[xplore@full_text_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” on Linux but on Windows, you need to replace the double backslashes with forward slashes
  • common.64bits: Whether the Java mentioned below is a 32 or 64 bits version
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • indexagent.configMode.create: Whether or not you want to install an IndexAgent (binary value)
  • indexagent.configMode.upgrade: Whether or not you want to upgrade an IndexAgent (binary value)
  • indexagent.configMode.delete: Whether or not you want to delete an IndexAgent (binary value)
  • indexagent.configMode.create.migration: This isn’t used anymore in recent installer versions but I still don’t know what its purpose was before… In any case, set this to 0 ;)
  • indexagent.ess.host: The Fully Qualified Domain Name of the primary Dsearch this new IndexAgent will be linked to
  • indexagent.ess.port: The port that the primary Dsearch is using
  • indexagent.name: The name of the IndexAgent to be installed. The default name is usually Indexagent_<docbase_name>
  • indexagent.FQDN: The Fully Qualified Domain Name of the current host the IndexAgent is being installed on
  • indexagent.instance.port: The port that the IndexAgent is/will be using (HTTP)
  • indexagent.instance.password: The password to be used for the new IndexAgent JBoss admin
  • indexagent.docbase.name: The name of the docbase/repository that this IndexAgent is being installed for
  • indexagent.docbase.user: The name of an account on the target docbase/repository to be used to configure the objects (updating the dm_server_config, dm_ftindex_agent_config, aso…) and that has the needed permissions for that
  • indexagent.docbase.password: The password of the above-mentioned account
  • indexagent.connectionBroker.host: The Fully Qualified Domain Name of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.connectionBroker.port: The port of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.globalRegistryRepository.name: The name of the GR repository
  • indexagent.globalRegistryRepository.user: The name of the BOF Registry account created on the CS inside the GR repository. This is usually something like “dm_bof_registry”
  • indexagent.globalRegistryRepository.password: The password used by the BOF Registry account
  • indexagent.storage.name: The name of the storage location to be created. The default one is “default”. If you intend to create new collections, you might want to give it a more meaningful name
  • indexagent.local_content_area: The path to be used to store the content temporarily on the file system. The value I used above is the default one but you can put it wherever you want. If you are using a multi-node, this path needs to be accessible from all nodes of the multi-node so you can put it under the “ess.data_dir” folder for example
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on linux, I always set a dummy password and there is no issue

 

Once the properties file is ready, make sure that the Dsearch this IndexAgent is linked to is currently running (http(s)://<indexagent.ess.host>:<indexagent.ess.port>/dsearchadmin), make sure that the Global Registry repository (gr_docbase) as well as the target repository (Docbase1) are running and then you can install the Documentum IndexAgent in silent using the following command:

[xplore@full_text_server_01 ~]$ /opt/xPlore/setup/indexagent/iaConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f /tmp/xplore_install/FT_IA_Installation.properties
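
Once the installer completes, the IndexAgent UI (usually deployed under the /IndexAgent context) should be reachable on the HTTP port configured above (9200 here); a quick check could be (just a sketch):

[xplore@full_text_server_01 ~]$ curl -sk -I http://full_text_server_01.dbi-services.com:9200/IndexAgent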

 

This now concludes the series about Documentum silent installation. There are other components that support the silent installation like the Process Engine for example but usually they require only a few parameters (or even none) so that’s why I’m not including them here.

 

The article Documentum – Silent Install – xPlore IndexAgent appeared first on Blog dbi services.

Strange behavior when patching GI/ASM


I tried to apply a patch to my 18.3.0 GI/ASM two node cluster on RHEL 7.5.
The first node worked fine, but the second node always got an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Successful

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys I found out that the host keys had to be exchanged for root as well.
So I connected as root and made an ssh connection from dbserver01 to dbserver02 and from dbserver02 to dbserver01.

After I exchanged the host keys the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further and the statement with the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and from there an additional session to dbserver01 (itself).
I don’t know why but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember that I ever had to do a key exchange from the root user onto the same host before.
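
For reference, the “key exchange” I ended up doing as root on dbserver01 towards itself boils down to something like this (just a sketch, assuming password authentication is temporarily possible for root, otherwise append the public key to ~/.ssh/authorized_keys manually):

# accept dbserver01's own host key into root's known_hosts
ssh-keyscan dbserver01 >> /root/.ssh/known_hosts
# authorize root's public key for passwordless login to itself
ssh-copy-id root@dbserver01
# verify
ssh root@dbserver01 hostname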

Did you get the same problem or do you know a better way to do that? Write me a comment!

The article Strange behavior when patching GI/ASM appeared first on Blog dbi services.

Documentum Upgrade – Missing DARs after upgrade


As part of the same migration & upgrade project I already talked about in previous blogs (corrupt lockbox, duplicate objects & wrong target_server), I have seen a very annoying and, this time, absolutely inconsistent behavior in some upgrades from Documentum 7.x to 16.x versions. The issue, or rather the issues, I had was that random DAR files were not installed properly. This makes it rather difficult to anticipate since you basically don’t know what might fail before you actually do it for real. Performing a DryRun helps a lot in anticipating potential (recurring) problems but if the issue itself is random, there isn’t much you can do without some gifts (if you can see the future, please reach out to me!)…

 

In the past couple of months, I performed around a dozen {migration+upgrade} and about half of these had issues with random DAR installations during the upgrade process. Even a DryRun and a real execution of the exact same procedure using the exact same source system ended up with two different results: one worked without issue (the real migration, fortunately) while the DryRun ended up with a missing dar. In the procedure, it is checked whether or not there are any locks on repository objects, whether there are inconsistencies, whether there are any tasks in progress, aso…

 

Issues were mostly linked to the following few DARs:

  • LDAP.dar
  • MessagingApp.dar
  • MailApp.dar

 

I. LDAP

First, regarding the LDAP dar file, it only happened once and it was pretty easy to spot. As part of the migrations, I had to change the LDAP Server used. Since the target system was on Kubernetes using complete CI/CD, we automated the creation of the LDAP Config Object with all its parameters but this piece failed for one of the migrations. Replicating the issue showed the following outcome:

[dmadmin@stg_cs ~]$ iapi REPO1
Please enter a user (dmadmin):
Please enter password for dmadmin:

		OpenText Documentum iapi - Interactive API interface
		Copyright (c) 2018. OpenText Corporation
		All rights reserved.
		Client Library Release 16.4.0170.0080

Connecting to Server using docbase REPO1
[DM_SESSION_I_SESSION_START]info:  "Session 010f123450262d3b started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> ?,c,select r_object_id, object_name from dm_ldap_config
r_object_id       object_name
----------------  ------------------------
(0 rows affected)

API> create,c,dm_ldap_config
...
[DM_DFC_E_CLASS_NOT_FOUND]error:  "Unable to instantiate the necessary java class: com.documentum.ldap.impl.DfLdapConfig"

java.lang.ClassNotFoundException: com.documentum.ldap.impl.DfLdapConfig

com.documentum.thirdparty.javassist.NotFoundException: com.documentum.ldap.impl.DfLdapConfig


API> ?,c,SELECT r_object_id, r_modify_date, object_name FROM dmc_dar ORDER BY r_modify_date ASC;
r_object_id       r_modify_date              object_name
----------------  -------------------------  ------------------------
080f1234500007a5  12/1/2018 09:05:30         LDAP
080f12345086063d  2/12/2020 16:26:12         Smart Container
080f123450860780  2/12/2020 16:26:44         Webtop
080f1234508607a1  2/12/2020 16:26:59         Workflow
080f1234508607f9  2/12/2020 16:27:34         Presets
...

API> exit
Bye
[dmadmin@stg_cs ~]$

 

This kind of error ([DM_DFC_E_CLASS_NOT_FOUND]error: “Unable to instantiate the necessary java class: com.documentum.ldap.impl.DfLdapConfig”) can happen when the LDAP dar isn’t installed properly. In this case, that is indeed what happened during the upgrade: the current DAR seemed to be from the source system before the upgrade (its r_modify_date is much older). The DAR installation log file generated by the upgrade shows that the LDAP one was skipped:

[dmadmin@stg_cs ~]$ grep "\[ERR" $DOCUMENTUM/dba/config/REPO1/dars.log
[ERROR]  A module 'IDfLdapConfigModule' already exists under folder 'IDfLdapConfigModule'.
[dmadmin@stg_cs ~]$

 

After re-installing the LDAP dar, the issue was resolved.
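
To double-check that the re-install actually took effect, the same dmc_dar query as before can be reused; the r_modify_date of the LDAP dar should now reflect the re-installation date (just a sketch, run from an iapi session on the repository):

API> ?,c,SELECT r_object_id, r_modify_date, object_name FROM dmc_dar WHERE object_name='LDAP';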

 

II. MessagingApp

Then regarding the MessagingApp dar file, this one also only happened once and it was very strange… While doing sanity checks after the end of the migration, everything was working except for searches from a client application like DA or D2. From the repository itself, full text searches were working properly:

API> ?,c,SELECT r_object_id, object_name FROM dm_document SEARCH document contains 'TestDocument';
r_object_id       object_name
----------------  --------------------
090f2345600731d6  TestDoc.pdf
(1 row affected)

 

However, doing the same kind of search on D2 for example showed something completely different:

2020-03-03 10:30:55,750 UTC [INFO ] ([ACTIVE] ExecuteThread: '70' for queue: 'weblogic.kernel.Default (self-tuning)') - c.e.x3.server.services.RpcDoclistServiceImpl  : Context REPO2-1583231056848-dmadmin-2003987903 with terms = TestDocument
2020-03-03 10:30:55,751 UTC [DEBUG] ([ACTIVE] ExecuteThread: '70' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.aspects.InjectSessionAspect   : Call first service D2SearchService.getQuickSearchContentWithOption(..)
2020-03-03 10:30:55,751 UTC [DEBUG] ([ACTIVE] ExecuteThread: '70' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.aspects.InjectSessionAspect   : InjectSessionAspect::process method: com.emc.d2fs.dctm.web.services.search.D2SearchService.getQuickSearchContentWithOption
...
...
2020-03-03 10:31:01,289 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.common.dctm.queries.D2QueryBuilder    : Query History: IDfQueryEvent(INTERNAL, DEFAULT): [REPO2] returned [Start processing] at [2020-03-03 10:30:56:007 +0000]
IDfQueryEvent(ERROR, UNKNOWN): [REPO2] returned [[DM_VEL_INSTANTIATION_ERROR]error:  "Cannot instantiate Java class"] at [2020-03-03 10:31:01:280 +0000]
DfServiceInstantiationException:: THREAD: Search Broker:REPO2:processing started at Tue Mar 03 10:30:55 UTC 2020; MSG: [DM_VEL_INSTANTIATION_ERROR]error:  "Cannot instantiate Java class"; ERRORCODE: 1902; NEXT: null
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.loadModuleClass(ModuleManager.java:258)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.getModuleClass(ModuleManager.java:203)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.newModule(ModuleManager.java:154)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.newModule(ModuleManager.java:86)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.newModule(ModuleManager.java:60)
        at com.documentum.fc.client.DfClient$ClientImpl.newModule(DfClient.java:466)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.ComplexMappingDefinitionManager.getMappingModule(ComplexMappingDefinitionManager.java:352)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.ComplexMappingDefinitionManager.getComplexMappingDefinitionFromDocbase(ComplexMappingDefinitionManager.java:319)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.ComplexMappingDefinitionManager.loadComplexMappingDefinition(ComplexMappingDefinitionManager.java:149)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.ComplexMappingDefinitionManager.getComplexMappingDefinition(ComplexMappingDefinitionManager.java:75)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.loading.legacy.LegacyMappingLoader.loadSearchInterfaces(LegacyMappingLoader.java:42)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.EosMappingLoader.populateLegacyMapping(EosMappingLoader.java:199)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.EosMappingLoader.populateMappingCache(EosMappingLoader.java:112)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.definition.EosMappingLoader.getInterface(EosMappingLoader.java:63)
        at com.documentum.fc.client.search.impl.generation.docbase.common.sco.mapping.SCOGenerator.isComplexQuery(SCOGenerator.java:38)
        at com.documentum.fc.client.search.impl.generation.docbase.TargetLanguageSelector.initByQueryBuilder(TargetLanguageSelector.java:85)
        at com.documentum.fc.client.search.impl.generation.docbase.TargetLanguageSelector.<init>(TargetLanguageSelector.java:39)
        at com.documentum.fc.client.search.impl.generation.docbase.DocbaseQueryGeneratorManager.generateQueryExecutor(DocbaseQueryGeneratorManager.java:248)
        at com.documentum.fc.client.search.impl.generation.docbase.DocbaseQueryGeneratorManager.generateQueryExecutor(DocbaseQueryGeneratorManager.java:96)
        at com.documentum.fc.client.search.impl.execution.adapter.docbase.DocbaseAdapter.execute(DocbaseAdapter.java:83)
        at com.documentum.fc.client.search.impl.execution.broker.SearchJob.handleProcessingState(SearchJob.java:382)
        at com.documentum.fc.client.search.impl.execution.broker.SearchJob.doRunLoop(SearchJob.java:477)
        at com.documentum.fc.client.search.impl.execution.broker.SearchJob.run(SearchJob.java:433)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.documentum.services.complexobjects.impl.ComplexObjectMappingDefImpl
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at com.documentum.fc.client.impl.bof.classmgmt.URLClassLoaderEx.findClass(URLClassLoaderEx.java:49)
        at com.documentum.fc.client.impl.bof.classmgmt.DelayedDelegationClassLoader.findClass(DelayedDelegationClassLoader.java:241)
        at com.documentum.fc.client.impl.bof.classmgmt.AbstractTransformingClassLoader.findClass(AbstractTransformingClassLoader.java:122)
        at com.documentum.fc.client.impl.bof.classmgmt.DelayedDelegationClassLoader.loadClass(DelayedDelegationClassLoader.java:147)
        at com.documentum.fc.client.impl.bof.classmgmt.AbstractTransformingClassLoader.loadClass(AbstractTransformingClassLoader.java:69)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.loadModuleClass(ModuleManager.java:254)
        ... 25 more

IDfQueryEvent(ERROR, UNREACHABLE): [REPO2] returned [Unable to process query] at [2020-03-03 10:31:01:281 +0000]
, Query Status: 6
2020-03-03 10:31:01,291 UTC [ERROR] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.common.dctm.queries.D2QueryBuilder    : The search has failed. null
2020-03-03 10:31:01,307 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Executing xPlore search ended : 6.722s
2020-03-03 10:31:01,307 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Enter buildItems
2020-03-03 10:31:01,308 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : FACETS: ObjectID = 080f2345602f4a20
2020-03-03 10:31:01,310 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : listColNames=[object_name, score, title, a_status, r_modify_date, r_modifier]
2020-03-03 10:31:01,311 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Exit buildItems
2020-03-03 10:31:01,311 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : FACETS: leaving getContent
2020-03-03 10:31:01,354 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : getSearchContent - start building facets
2020-03-03 10:31:01,355 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Exit buildFacets
2020-03-03 10:31:01,355 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Query name = lastSearch
2020-03-03 10:31:01,363 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Enter getObjectName
2020-03-03 10:31:01,364 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Exit getObjectName
2020-03-03 10:31:01,364 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : FACETS: attrNameList from query = []
2020-03-03 10:31:01,364 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : FACETS: attrValueList from query = []
2020-03-03 10:31:01,365 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : searchTypes = [dm_document]
2020-03-03 10:31:01,365 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : getSearchContent - done building facets
2020-03-03 10:31:01,365 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.content.D2cQueryContent     : Exit getSearchContent
2020-03-03 10:31:01,366 UTC [ERROR] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - c.emc.d2fs.dctm.aspects.InjectSessionAspect   : {}
com.documentum.fc.common.DfException: The search has failed.
[DM_VEL_INSTANTIATION_ERROR]
        at com.emc.d2fs.dctm.content.D2cQueryContent.getSearchContent(D2cQueryContent.java:598)
        at com.emc.d2fs.dctm.content.NodeLastSearchContent.getSearchContent(NodeLastSearchContent.java:217)
        at com.emc.d2fs.dctm.web.services.content.D2ContentService.getContent(D2ContentService.java:391)
        at com.emc.d2fs.dctm.web.services.content.D2ContentService.getSearchContent_aroundBody14(D2ContentService.java:425)
        at com.emc.d2fs.dctm.web.services.content.D2ContentService$AjcClosure15.run(D2ContentService.java:1)
        at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:229)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.process(InjectSessionAspect.java:240)
        at com.emc.d2fs.dctm.web.services.content.D2ContentService.getSearchContent(D2ContentService.java:403)
        at com.emc.x3.client.services.search.RpcSearchManagerServiceImpl.getSearchResults(RpcSearchManagerServiceImpl.java:37)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:587)
        at com.emc.x3.server.GuiceRemoteServiceServlet.processCall(GuiceRemoteServiceServlet.java:105)
        at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:373)
        at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263)
        at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178)
        at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62)
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
        at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
        at com.custom.d2.auth.filters.NonSSOAuthenticationFilter.executeChain(NonSSOAuthenticationFilter.java:33)
        at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
        at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
        at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
        at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
        at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
        at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
        at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:168)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:168)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:40)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3706)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3672)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
2020-03-03 10:31:01,366 UTC [DEBUG] ([ACTIVE] ExecuteThread: '76' for queue: 'weblogic.kernel.Default (self-tuning)') - com.emc.d2fs.dctm.web.services.D2fsContext    : Release session : s1

 

As you can see above, D2 complains about the instantiation of a specific class (“com.documentum.services.complexobjects.impl.ComplexObjectMappingDefImpl”). This class is part of an SBO bundled in the MessagingApp.dar, as mentioned in KB6289567 & KB6296577.
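
A quick way to double-check on the repository side is to query the module object directly with DQL. This is only a sketch (the way the credentials are passed is a placeholder); it should return one row if the SBO module is registered:

# Hypothetical call: install owner and password variable are placeholders
idql REPO2 -Udmadmin -P"${DM_PASSWORD}" <<'EOF'
SELECT r_object_id, object_name, r_modify_date FROM dmc_module WHERE object_name = 'com.documentum.services.complexobjects.impl.ComplexObjectMappingDefImpl'
go
EOF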

 

Therefore, the DAR installation must have failed, right? Well, it didn't, and that's the strange thing I was talking about… I have evidence of the proper installation of the MessagingApp.dar inside the repository the day before:

[INFO]  ******************************************************
[INFO]  * Headless Composer
[INFO]  * Version:        16.4.000.0042
[INFO]  * Java version:   1.8.0_152 (64bit)
[INFO]  * Java home:      $JAVA_HOME/jre
[INFO]  * Set storage type: false
[INFO]  *
[INFO]  * DAR file:       $DOCUMENTUM/product/16.4/install/DARsInternal/MessagingApp.dar
[INFO]  * Project name:   MessagingApp
[INFO]  * Built by Composer: 7.1.0000.0186
[INFO]  *
[INFO]  * Repository:     REPO2
[INFO]  * Server version: 16.4.0170.0234  Linux64.Oracle
[INFO]  * User name:      dmadmin
[INFO]  ******************************************************
[INFO]  Install started...  Mon Mar 02 22:18:00 UTC 2020
[INFO]  Executing pre-install script
[INFO]  Pre-install script executed successfully Mon Mar 02 22:18:00 UTC 2020
...
[INFO]  Done Overwriting object : 'com.documentum.services.complexobjects.impl.ComplexObjectMappingDefImpl'(dmc_module 0b0f2345600008f9)
...
[INFO]  Done Versioning object : 'MessagingApp'(dmc_dar 080f2345608608e9)
...
[INFO]  Finished executing post-install actions Mon Mar 02 22:18:30 UTC 2020
[INFO]  Finished executing post-install script Mon Mar 02 22:18:32 UTC 2020
[INFO]  Project 'MessagingApp' was successfully installed.

 

There are absolutely no errors, and it shows that the supposedly missing class “com.documentum.services.complexobjects.impl.ComplexObjectMappingDefImpl” was upgraded properly, yet on D2 it doesn't work (the DAR was properly installed on both the Global Registry and the repository used for the search). Re-installing the DAR file produced exactly the same log file: 100% identical except for the date, obviously. After this re-installation, however, the issue was magically gone. Honestly, I'm still amazed that this is even possible and I'm pretty sure I will never find the reason.

 

III. MailApp

Finally, the last issue is with the MailApp DAR file. That's the one with the most occurrences as far as I could see. During an upgrade from 7.3 to 16.4 P17, the DAR installation failed and the following was shown inside the “dars.log” file:

[INFO]  ******************************************************
[INFO]  * Headless Composer
[INFO]  * Version:        16.4.000.0042
[INFO]  * Java version:   1.8.0_152 (64bit)
[INFO]  * Java home:      $JAVA_HOME/jre
[INFO]  * Set storage type: false
[INFO]  *
[INFO]  * DAR file:       $DOCUMENTUM/product/16.4/install/DARsInternal/MailApp.dar
[INFO]  * Project name:   MailApp
[INFO]  * Built by Composer: 7.1.0000.0186
[INFO]  *
[INFO]  * Repository:     REPO3
[INFO]  * Server version: 16.4.0170.0234  Linux64.Oracle
[INFO]  * User name:      dmadmin
[INFO]  ******************************************************
[INFO]  Install started...  Thu Mar 12 10:08:27 UTC 2020
[INFO]  Executing pre-install script
[INFO]  dmbasic.exe output : connecting docbase...REPO3
[INFO]  dmbasic.exe output : dm_attachment_folder type exists
[INFO]  dmbasic.exe output : Relation type 'dm_attachments_relation' already exists
[INFO]  dmbasic.exe output : Disconnect from the docbase.
[INFO]  Pre-install script executed successfully Thu Mar 12 10:08:31 UTC 2020
[WARN]  Cannot retrieve object by Object Id. This may happen if an object previously installed by Composer was deleted. Object reference will be returned as null. OID: 0b0f345670000df1, URN: urnd:com.emc.ide.artifact.moduledef/com.documentum.mailapp.operations.DfPreProcessMessageObject?artifactURI=file:/C:/Source/.../com.documentum.mailapp.operations.dfpreprocessmessageobject.module#//@dataModel/@externalInterfaces
[WARN]  Cannot retrieve object by Object Id. This may happen if an object previously installed by Composer was deleted. Object reference will be returned as null. OID: 0b0f345670000fe9, URN: urnd:com.emc.ide.artifact.aspectmoduledef/dm_attachmentfolder_aspect?artifactURI=file:/C:/Source/.../dm_attachmentfolder_aspect.module#//@dataModel/@miscellaneous
[WARN]  Cannot retrieve object by Object Id. This may happen if an object previously installed by Composer was deleted. Object reference will be returned as null. OID: 090f345670000e37, URN: urnd:com.emc.ide.artifact.jardef.jardef/attachmentfolderaspect.jar?artifactURI=file:/C:/source/.../attachmentfolderaspect.jar%5B1%5D.jardef#//@dataModel
[WARN]  Cannot retrieve object by Object Id. This may happen if an object previously installed by Composer was deleted. Object reference will be returned as null. OID: 080f345670000e41, URN: urnd:com.emc.ide.artifact.moduledef/com.message.aspose?artifactURI=file:/C:/Users/.../com.message.aspose.module#//@dataModel/@runtimeEnvironmentXML
[ERROR]  A module 'dm_attachmentfolder_aspect' already exists under folder 'Aspect'.
[ERROR]  A module 'mdmo_message_aspect' already exists under folder 'Aspect'.
[WARN]  superTypeName is null. This might happen if the dependent project is not Installed in the same ANT build invocation
[ERROR]  A module 'com.documentum.mailapp.operations.DfPreProcessMessageObject' already exists under folder 'Operations'.
[ERROR]  A module 'com.documentum.mailapp.operations.inbound.DfCleanUpLocalMailAppFiles' already exists under folder 'Operations'.
[ERROR]  A module 'com.documentum.mailapp.operations.inbound.DfFixUpAttachments' already exists under folder 'Operations'.
[ERROR]  A module 'com.documentum.mailapp.operations.inbound.DfImportMailObject' already exists under folder 'Operations'.
[ERROR]  A module 'com.documentum.mailapp.operations.inbound.DfSeparateAttachments' already exists under folder 'Operations'.
[ERROR]  A module 'aspose' already exists under folder 'Modules'.
[ERROR]  A module 'mailappconfig' already exists under folder 'Modules'.
[INFO]  MailApp install aborted by user.

 

On another migration, this time with a 7.2 source and a 16.4 P20 target, we had another batch of issues. On 7.2, the MailApp didn't exist (as far as I know), so the upgrade is supposed to install this DAR for the first time, but it fails because some of the pieces already exist. If you look at the logs above, the same type existed already as well, but there the “Pre-install” script just continued without any problem (line 19, 22 above // line 20, 22 below). Below, it fails on already existing types, and in both cases [above for 7.3 and below for 7.2] the “preserve_existing_types” flag is set to “T” (True) in the server.ini of all repositories, so it doesn't make much sense that there is a difference in behavior… However, that's how it is, so if you have any explanation, feel free to share! I asked OpenText to look into it but nothing came out of it so far. Anyway, here are the logs on the 7.2 repository:

[INFO]  ******************************************************
[INFO]  * Headless Composer
[INFO]  * Version:        16.4.000.0042
[INFO]  * Java version:   1.8.0_152 (64bit)
[INFO]  * Java home:      $JAVA_HOME/jre
[INFO]  * Set storage type: false
[INFO]  *
[INFO]  * DAR file:       $DOCUMENTUM/product/16.4/install/DARsInternal/MailApp.dar
[INFO]  * Project name:   MailApp
[INFO]  * Built by Composer: 7.1.0000.0186
[INFO]  *
[INFO]  * Repository:     REPO4
[INFO]  * Server version: 16.4.0200.0256  Linux64.Oracle
[INFO]  * User name:      dmadmin
[INFO]  ******************************************************
[INFO]  Install started...  Fri Apr 03 08:32:45 UTC 2020
[INFO]  Executing pre-install script
[INFO]  dmbasic.exe output : connecting docbase...REPO4
[INFO]  dmbasic.exe output : Create dm_state_extension type.
[INFO]  dmbasic.exe output : [DM_QUERY_E_CREATE_FAILED]error:  "CREATE TYPE statement failed for type: dm_attachment_folder."
[INFO]  dmbasic.exe output :
[INFO]  dmbasic.exe output : [DM_TYPE_MGR_E_EXISTING_TABLE]error:  "Cannot create type dm_attachment_folder because the table dm_attachment_folder_s unexpectedly already exists in the database and the server 'preserve_existing_types' flag is enabled.  To complete this operation the table must first be manually dropped or the server flag disabled."
[INFO]  dmbasic.exe output :
[INFO]  dmbasic.exe output :
[INFO]  dmbasic.exe output : Failed to create dm_attachment_folder type
[ERROR]  Procedure execution failed with dmbasic.exe exit value : 255
[INFO]  MailApp install failed.
[ERROR]  Unable to install dar file $DOCUMENTUM/product/16.4/install/DARsInternal/MailApp.dar
com.emc.ide.installer.PreInstallException: Error running pre-install procedure "presetup". Please contact the procedure owner to verify if it is functioning properly. Please also check if the JAVA_HOME is pointing to the correct JDK. In case of multiple installed JDK's, please provide -vm <JDK>bin flag in the composer.ini/dardeployer.ini files
        at internal.com.emc.ide.installer.DarInstaller.preInstall(DarInstaller.java:1085)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:495)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:334)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:303)
        at com.emc.ide.installer.util.IDarInstallerHelper.doInPlaceInstall(IDarInstallerHelper.java:127)
        at com.emc.ant.installer.api.InstallerAntTask.installDar(InstallerAntTask.java:258)
        at com.emc.ant.installer.api.InstallerAntTask.execute(InstallerAntTask.java:135)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.Target.execute(Target.java:392)
        at org.apache.tools.ant.Target.performTasks(Target.java:413)
        at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
        at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
        at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
        at org.eclipse.ant.internal.core.ant.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32)
        at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
        at org.eclipse.ant.internal.core.ant.InternalAntRunner.run(InternalAntRunner.java:672)
        at org.eclipse.ant.internal.core.ant.InternalAntRunner.run(InternalAntRunner.java:537)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.eclipse.ant.core.AntRunner.run(AntRunner.java:513)
        at org.eclipse.ant.core.AntRunner.start(AntRunner.java:600)
        at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629)
        at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
        at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
        at org.eclipse.equinox.launcher.Main.main(Main.java:1414)
        at org.eclipse.core.launcher.Main.main(Main.java:34)
Caused by: com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerException: Procedure execution failed with dmbasic.exe exit value : 255
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerUtils.executeDmBasic(ProcedureRunnerUtils.java:283)
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunner.execute(ProcedureRunner.java:55)
        at internal.com.emc.ide.installer.DarInstaller.preInstall(DarInstaller.java:1080)
        ... 42 more
[ERROR]  Failed to install DAR
Unable to install dar file $DOCUMENTUM/product/16.4/install/DARsInternal/MailApp.dar
        at com.emc.ant.installer.api.InstallerAntTask.installDar(InstallerAntTask.java:273)
        at com.emc.ant.installer.api.InstallerAntTask.execute(InstallerAntTask.java:135)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.Target.execute(Target.java:392)
        at org.apache.tools.ant.Target.performTasks(Target.java:413)
        at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
        at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
        at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
        at org.eclipse.ant.internal.core.ant.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32)
        at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
        at org.eclipse.ant.internal.core.ant.InternalAntRunner.run(InternalAntRunner.java:672)
        at org.eclipse.ant.internal.core.ant.InternalAntRunner.run(InternalAntRunner.java:537)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.eclipse.ant.core.AntRunner.run(AntRunner.java:513)
        at org.eclipse.ant.core.AntRunner.start(AntRunner.java:600)
        at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
        at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353)
        at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629)
        at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
        at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
        at org.eclipse.equinox.launcher.Main.main(Main.java:1414)
        at org.eclipse.core.launcher.Main.main(Main.java:34)
Caused by: com.emc.ide.installer.PreInstallException: Error running pre-install procedure "presetup". Please contact the procedure owner to verify if it is functioning properly. Please also check if the JAVA_HOME is pointing to the correct JDK. In case of multiple installed JDK's, please provide -vm <JDK>bin flag in the composer.ini/dardeployer.ini files
        at internal.com.emc.ide.installer.DarInstaller.preInstall(DarInstaller.java:1085)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:495)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:334)
        at internal.com.emc.ide.installer.DarInstaller.doInstall(DarInstaller.java:303)
        at com.emc.ide.installer.util.IDarInstallerHelper.doInPlaceInstall(IDarInstallerHelper.java:127)
        at com.emc.ant.installer.api.InstallerAntTask.installDar(InstallerAntTask.java:258)
        ... 37 more
Caused by: com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerException: Procedure execution failed with dmbasic.exe exit value : 255
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunnerUtils.executeDmBasic(ProcedureRunnerUtils.java:283)
        at com.emc.ide.external.dfc.procedurerunner.ProcedureRunner.execute(ProcedureRunner.java:55)
        at internal.com.emc.ide.installer.DarInstaller.preInstall(DarInstaller.java:1080)
        ... 42 more

 

If you run into the above errors, you can simply set the “preserve_existing_types” flag to “F” (False), then start the DAR installation again and it should install properly this time. Please take care with this flag! If you are copying the repository, it must be set to “T” (True), otherwise it will most likely cause you big troubles… But for an in-place upgrade, you can and should set it to “F” (False) before starting the repository upgrade and switch it back to “T” (True) once the upgrade is completed and all DARs have been installed. Make sure you do that and the number of issues during DAR installations should decrease drastically. A minimal sketch of such a flag change is shown right below.
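
For reference, here is a minimal sketch of how such a flag change typically looks on the Content Server host. The repository name and the $DOCUMENTUM layout are assumptions, so adapt them to your environment:

# Example only: repository name and paths are placeholders
REPO=REPO4
SERVER_INI=$DOCUMENTUM/dba/config/${REPO}/server.ini

# switch preserve_existing_types from T to F in the [SERVER_STARTUP] section
sed -i 's/^preserve_existing_types *= *T/preserve_existing_types = F/' "$SERVER_INI"

# restart the repository so the new value is taken into account
$DOCUMENTUM/dba/dm_shutdown_${REPO}
$DOCUMENTUM/dba/dm_start_${REPO}

# once the DARs are installed, set the flag back to T and restart the repository again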

 

Anyway, all that to say that there are some best practices to apply when upgrading, even if they are not documented anywhere. In addition, you should keep an eye on the DAR installation logs and really test your application, because even when everything seems to have gone well, you might not be completely safe… Where would be the fun if you could rely on deterministic systems? 😉

 

Cet article Documentum Upgrade – Missing DARs after upgrade est apparu en premier sur Blog dbi services.

Installing EDB Advanced Server without EDB Repository

$
0
0

Almost all of us who have already installed EnterpriseDB tools on a Linux server know the procedure. You need to add the EDB repository to your server or your Red Hat Satellite configuration, and after that you can easily install the tools you need. But what happens if you are not able to add the repository?
The answer is really simple: download the RPMs and install them directly on the server.

Download the EDB tar ball

On the EDB repository page you will find the option to download a tar ball which contains all the packages needed to install EDB products. The tar ball is about 2 GB in size, so make sure there is enough space for it somewhere.

Just scroll down a bit on the repo page.

In case you don't want to download the whole tar ball, it is also possible to download the individual RPMs. But as I want to have everything, I will go on with the tar ball.

Download the tar ball directly. Of course, you will need your subscription credentials here as well.

 wget https://username:password@yum.enterprisedb.com/edb/redhat/edb_redhat_rhel-7-x86_64.tar.gz
--2020-11-19 09:52:34--  https://username:*password*@yum.enterprisedb.com/edb/redhat/edb_redhat_rhel-7-x86_64.tar.gz
Resolving yum.enterprisedb.com (yum.enterprisedb.com)... 54.165.250.135
Connecting to yum.enterprisedb.com (yum.enterprisedb.com)|54.165.250.135|:443... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Reusing existing connection to yum.enterprisedb.com:443.
HTTP request sent, awaiting response... 200 OK
Length: 2052730909 (1.9G) [application/x-gzip]
Saving to: ‘edb_redhat_rhel-7-x86_64.tar.gz’

10% [==============                                                               ] 215,687,168 2.40MB/s  eta 21m 51s

Extract the needed packages

As we want to install EDB Advanced Server and EFM, we need to find out which packages are available in the tar ball. As the output is quite long, let's write it into a file for better readability.

tar -tf edb_redhat_rhel-7-x86_64.tar.gz > edb_rhel_packages.txt

After that, we can search for the files we need using vi and then extract them. Find below the complete list of packages you need for the installation of EDB Advanced Server (a one-command alternative is shown right after the list).

tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-13.0-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-edbplus-39.0.0-1.rhel7.x86_64.rpm  
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-libicu-66.1-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-pgagent-4.2.0-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-pgpool41-extensions-4.1.2-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-pgsnmpd-1.0-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-client-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-cloneschema-1.14-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-contrib-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-core-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-devel-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-docs-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-edb-modules-1.0-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-indexadvisor-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-libs-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-llvmjit-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-parallel-clone-1.8-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-pldebugger-1.1-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-plperl-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-plpython3-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-pltcl-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-sqlprofiler-4.0-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-sqlprotect-13.0.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-server-sslutils-1.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-slony-replication-2.2.8-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-slony-replication-core-2.2.8-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-slony-replication-docs-2.2.8-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-as13-slony-replication-tools-2.2.8-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-jdbc-42.2.12.3-1.rhel7.x86_64.rpm
tar -xf edb_redhat_rhel-7-x86_64.tar.gz edb-pgpool41-libs-4.1.2-1.rhel7.x86_64.rpm
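
If you prefer a single command over extracting each RPM one by one, GNU tar can also do the filtering for you. This is only a sketch; the wildcard patterns simply match the packages listed above:

# GNU tar wildcard extraction of all packages needed for EDB Advanced Server 13
tar -xf edb_redhat_rhel-7-x86_64.tar.gz --wildcards 'edb-as13-*.rpm' 'edb-jdbc-*.rpm' 'edb-pgpool41-libs-*.rpm'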

Install EDB Advanced Server

Now that all the needed packages are extracted, we can go on with the installation. First we have to install the required dependencies from the RHEL and EPEL repositories.

sudo yum install -y unzip xorg-x11-xauth screen boost-atomic boost-chrono boost-date-time boost-filesystem boost-system boost-regex boost-thread tcl uuid python3 python3-libs python3-pip python3-setuptools libtirpc libicu libxslt llvm5.0 llvm5.0-libs libtirpc lm_sensors-libs net-snmp-agent-libs net-snmp-libs perl-Compress-Raw-Bzip2 perl-Compress-Raw-Zlib perl-DBD-Pg perl-DBI perl-Data-Dumper perl-IO-Compress perl-Net-Daemon perl-PlRPC perl-version perl-Switch

Once all the needed dependencies are installed, we can finally install the Advanced Server using the RPMs.

sudo yum install /tmp/edb-*.rpm      

In case one RPM is missing, you will get a hint that one RPM needs another as a dependency. You can just extract this RPM from the tar ball as well and retry the installation command.

Finished. That's all you have to do, quite simple, isn't it?
Now you can go on with the setup of the database. In case you need help at that point, you can find some hints here.

Cet article Installing EDB Advanced Server without EDB Repository est apparu en premier sur Blog dbi services.

SQL Server – PolyBase Services when listening on all IP is disabled

$
0
0

Introduction

You will find a lot of blogs explaining how to install the PolyBase feature with a SQL Server database instance. You will also learn how to configure it and how to use it in these blogs. But I could not find any solution for one case I was faced with.

When you have several SQL Server instances on a server, you will either allocate a fixed port per instance or a dedicated IP per instance, or probably both.

Of course, if you define a dedicated IP address for your SQL Server instance, you will automatically disable the TCP/IP Listen All property and set a fixed port on this specific IP address.

Doing so, the best practice is to remove the fixed port in the IPAll section of the IP address configuration.

These are best practices that every DBA applies to comply with the security policies.

What about Polybase services

When installing the PolyBase feature, you have probably noticed the 2 new services running on your server:
– SQL Server PolyBase Data Movement
– SQL Server PolyBase Engine

Now that your standard configuration is set, you have to restart your services.
Restarting your SQL Server service will automatically restart the services that depend on it:
– SQL Server Agent
– SQL Server PolyBase Data Movement
– SQL Server PolyBase Engine

Once the configuration described above is applied, you will have the surprise that the 2 PolyBase services won't start anymore.

Annoying, especially if the security policy prevents me from enabling listening on all IP addresses.

Here is the trick

In fact, the fix is quite simple; I found it after trying multiple configuration possibilities.

It seems that the PolyBase services only check the port set in the IPAll section and not the one set on the IP address you enabled.
Therefore, if you leave the Listen All property set to “No” and set the fixed port in the IPAll section to the same port as the one set on the dedicated IP address of your SQL Server instance, the PolyBase services will be able to start again.
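
If you want to double-check which TCP ports the instance actually registered, the engine exposes them through sys.dm_server_registry. The server and instance names below are placeholders, so treat this as a sketch:

# Placeholder server\instance name; lists Enabled/TcpPort/TcpDynamicPorts per IP and for IPAll
sqlcmd -S 'SRV01\INST01' -E -Q "SELECT registry_key, value_name, value_data FROM sys.dm_server_registry WHERE registry_key LIKE '%SuperSocketNetLib%Tcp%' AND value_name IN ('Enabled','TcpPort','TcpDynamicPorts');"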

Conclusion

With this configuration you won't break your policies and best practices while using the PolyBase feature.

Cet article SQL Server – PolyBase Services when listening on all IP is disabled est apparu en premier sur Blog dbi services.

SQL Server 2019: What’s new in sp_configure and sys.configurations options?

$
0
0

SQL Server 2019 added new options in sp_configure and sys.configurations.
First, how can we find the differences between these SQL Server versions?
This simple query will give us the number of options and the SQL Server version:

select count(*),@@version FROM sys.configurations

In SQL Server 2016, we have 74 parameters for the instance configuration:

In SQL Server 2017, we have 77 parameters for the instance configuration:

In SQL Server 2019, we have 84 parameters for the instance configuration:

In SQL Server 2019, we have 7 more parameters than in SQL Server 2017.

In detail, we see that one parameter has been removed and 8 added:

    • The removed parameter has the ID 1577 and is named “common criteria compliance enabled”. More details here
    • The 8 new options are:
      • ID 1588 named “column encryption enclave type”
      • ID 1589 named “tempdb metadata memory-optimized”
      • ID 1591 named “ADR cleaner retry timeout (min) ”
      • ID 1592 named “ADR Preallocation Factor”
      • ID 1593 named “version high part of SQL Server”
      • ID 1594 named “version low part of SQL Server”
      • ID 16398 named “allow filesystem enumeration”
      • ID 16399 named “polybase enabled”

After identifying the new parameters, we will go a step further and look at their configuration with this query:

select * FROM sys.configurations where configuration_id in (1588,1589,1591,1592,1593,1594,16398,16399)

We can see that only one value defaults to 1 instead of 0: the parameter “allow filesystem enumeration”.
The 2 other interesting columns are “is_dynamic” and “is_advanced”:

  • When “is_dynamic” is set to 1, the parameter takes effect after a RECONFIGURE, without an instance restart (see the sketch right after this list).
  • When “is_advanced” is set to 1, the parameter is part of the advanced configuration.
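
Just to illustrate the RECONFIGURE mechanics, here is a hedged command-line sketch that enables one of the new options (the connection details are placeholders; depending on the option, an instance restart may still be required):

# Placeholder connection; enable one of the new SQL Server 2019 options and apply the change
sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'polybase enabled', 1; RECONFIGURE;"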

I will not explain or test the new parameters in this article.
It is just meant to give you a view of the new SQL Server 2019 instance configuration options.

Cet article SQL Server 2019: What’s new in sp_configure and sys.configurations options? est apparu en premier sur Blog dbi services.


Kubernetes : two different OCI runtimes

$
0
0

Quick recap: in the previous episode, we saw how to move from a Kubernetes cluster powered by Docker to a future-proof Kubernetes cluster using containerd. Now let's move on and fully enjoy the power of having this CRI intermediate layer sitting below the kubelet.

containerd and OCI runtimes

containerd is a high-level container runtime. It is CRI compliant, CRI being the API defined by Kubernetes for container runtimes.
Below is a good picture of the overall containerd architecture:

Source: https://containerd.io/

 

By design, containerd is capable of addressing multiple OCI runtimes in parallel. Even though containerd is bundled with runc, acting as the default OCI runtime implementation, we can configure containerd to refer to another implementation, such as Kata Containers.

Why use two different OCI runtimes? The answer is simple: each implementation has its own pros and cons.

runc is the de facto standard among OCI runtimes. Installed by default with containerd, it is also used on machines where Docker is set up. It relies on the host's kernel features, such as cgroups and namespaces, the basis of containers. It may be obvious, but all containers running with runc use the same kernel, which is also the host's kernel.

Kata Containers, on the other hand, while still an OCI implementation, is a kind of hybrid between what we can call standard containers and classic VMs. Each container spun up with Kata Containers gets its own dedicated kernel with additional security and isolation; it runs in a dedicated micro VM. It is capable of using hardware extensions like the VT virtualization extensions.

Setup of Kata Containers on a Kubernetes cluster.

In this setup, we are going to use the same kind of cluster as in the last episode: a Kubernetes cluster running the 1.20.7 release, with one master and two workers. As we are going to install a new OCI container runtime, this setup needs to be done on each worker node of the cluster, but also on the master(s) if you plan to run pods powered by Kata Containers on the master.

Let’s start with a worker.

Pick the latest release from their GitHub releases page, https://github.com/kata-containers/kata-containers/releases, and extract the archive on your node.

root@worker2:~# wget https://github.com/kata-containers/kata-containers/releases/download/2.2.0/kata-static-2.2.0-x86_64.tar.xz
--2021-09-10 16:27:59--  https://github.com/kata-containers/kata-containers/releases/download/2.2.0/kata-static-2.2.0-x86_64.tar.xz
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-releases.githubusercontent.com/113404957/129c220a-812b-4d75-a5e3-e988c988c306?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210910%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210910T162759Z&X-Amz-Expires=300&X-Amz-Signature=7d0032e7353127cd725c9f61e1136177c29702ded6c636cdb076783fbf0ce0b4&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=113404957&response-content-disposition=attachment%3B%20filename%3Dkata-static-2.2.0-x86_64.tar.xz&response-content-type=application%2Foctet-stream [following]
--2021-09-10 16:27:59--  https://github-releases.githubusercontent.com/113404957/129c220a-812b-4d75-a5e3-e988c988c306?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210910%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210910T162759Z&X-Amz-Expires=300&X-Amz-Signature=7d0032e7353127cd725c9f61e1136177c29702ded6c636cdb076783fbf0ce0b4&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=113404957&response-content-disposition=attachment%3B%20filename%3Dkata-static-2.2.0-x86_64.tar.xz&response-content-type=application%2Foctet-stream
Resolving github-releases.githubusercontent.com (github-releases.githubusercontent.com)... 185.199.108.154, 185.199.111.154, 185.199.109.154, ...
Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 92535904 (88M) [application/octet-stream]
Saving to: ‘kata-static-2.2.0-x86_64.tar.xz’

kata-static-2.2.0-x86_64.tar.xz                              100%[==============================================================================================================================================>]  88.25M  28.4MB/s    in 3.1s    

2021-09-10 16:28:03 (28.4 MB/s) - ‘kata-static-2.2.0-x86_64.tar.xz’ saved [92535904/92535904]

root@worker2:~# xz -d kata-static-2.2.0-x86_64.tar.xz
root@worker2:~# tar -xvf kata-static-2.2.0-x86_64.tar
./
./opt/kata/
./opt/kata/bin/
...
root@worker2:~#

As you noticed, the whole archive is extracted into an ./opt/kata folder.

Move the extracted folder into the node's /opt folder. As this folder is not part of the PATH, symbolic links are needed if you want to keep the containerd configuration file simple, with no folder references.

root@worker2:~# mv opt/kata/ /opt/
root@worker2:~# ln -s /opt/kata/bin/kata-runtime /usr/local/bin/kata-runtime
root@worker2:~# ln -s /opt/kata/bin/containerd-shim-kata-v2 /usr/local/bin/containerd-shim-kata-v2
root@worker2:~#

Let’s check if what we installed is working correctly.

root@worker2:~# kata-runtime --version
kata-runtime : 2.2.0
commit : <<unknown>>
OCI specs: 1.0.2-dev
root@worker2:~# kata-runtime check
WARN[0000] Not running network checks as super user arch=amd64 name=kata-runtime pid=61809 source=runtime
System is capable of running Kata Containers
System can currently create Kata Containers

Now we need to update the containerd configuration file and add the proper reference to the newly installed OCI runtime.

Modify the config.toml file in the /etc/containerd folder so that lines 3 and 4 shown below are added to your file (pay attention, these are nested lines!)

root@worker2:~# cat /etc/containerd/config.toml |grep ".containerd.runtimes]" -A 4
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
          runtime_type = "io.containerd.kata.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"

Once updated, we are ready to restart the containerd service.

root@worker2:~# systemctl restart containerd.service 
root@worker2:~# systemctl status containerd.service 
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-09-10 16:59:39 UTC; 4s ago
       Docs: https://containerd.io
    Process: 72140 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 72141 (containerd)
      Tasks: 103
...

Repeat those steps on other nodes, as many times as required by your infrastructure.

RuntimeClass in Kubernetes

Now we have our Kubernetes cluster with containerd, set up to run either runc containers or Kata Containers. But one thing is still missing: how do we specify in our workload which OCI runtime we would like to use?

Let's take a look at the RuntimeClass resource of Kubernetes. Graduated to stable in release 1.20, the RuntimeClass resource is there to specify which container runtime implementation you want to use. So, we need to create one for each container runtime. Once properly defined, this resource is used in conjunction with a pod definition.

Creating a RuntimeClass is pretty straightforward (notice that RuntimeClasses are not namespaced). You need to specify a name and a handler. The handler can be found in the containerd configuration file; it is the last part of the key name:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]

In our cluster, handlers are “kata” and “runc”.

dbi@master:~$ cat runtimeclass-kata.yml 
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-rc
handler: kata
dbi@master:~$ kubectl apply -f runtimeclass-kata.yml 
runtimeclass.node.k8s.io/kata-rc created

Same for the runc RuntimeClass:

dbi@master:~$ cat runtimeclass-runc.yml 
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc-rc
handler: runc
dbi@master:~$ kubectl apply -f runtimeclass-runc.yml 
runtimeclass.node.k8s.io/runc-rc created
dbi@master:~$ kubectl get runtimeclass
NAME      HANDLER   AGE
kata-rc   kata      78s
runc-rc   runc      2d8h

Once the RuntimeClass resources are created, let's spin up our pods. The workload itself is not that important in this context. We will focus on an easy and ready-to-go image: the popular web server Nginx.

We will deploy two pods, one for each container runtime. First, for the runc container runtime:

dbi@master:~$ cat nginx-on-runc.yml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-runc
spec:
  runtimeClassName: runc-rc
  containers:
  - name: nginx
    image: nginx

dbi@master:~$ kubectl apply -f nginx-on-runc.yml 
pod/nginx-on-runc created

Then, for the Kata Containers runtime:

dbi@master:~$ cat nginx-on-kata.yml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-kata
spec:
  runtimeClassName: kata-rc
  containers:
  - name: nginx
    image: nginx
dbi@master:~$ kubectl apply -f nginx-on-kata.yml 
pod/nginx-on-kata created

As always, some checks! Let's verify that our two pods are running.

dbi@master:~$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
nginx-on-kata   1/1     Running   0          10m   172.16.235.157   worker1   <none>           <none>
nginx-on-runc   1/1     Running   0          13m   172.16.189.87    worker2   <none>           <none>
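
To confirm which RuntimeClass each pod was actually scheduled with, we can also read it back from the pod spec (a small sketch, simply printing the runtimeClassName recorded for each pod):

kubectl get pod nginx-on-kata nginx-on-runc -o custom-columns=NAME:.metadata.name,RUNTIME:.spec.runtimeClassName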

We have two pods running Nginx on our cluster, providing the same service. But what is the difference? As said earlier, you should get kernel isolation and, for sure, a difference in version between the kernel inside your pod and the one on your worker.

Also, the worker running Nginx on Kata Containers will show some QEMU processes running. By default, Kata Containers uses the host's KVM kernel module and QEMU for the virtualization part.
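
Since the default Kata Containers setup relies on KVM and QEMU, a quick sanity check on the worker is to confirm that the KVM kernel modules are loaded (a sketch; kata-runtime check already covers this, but it is nice to see it directly):

# On the worker: kvm and kvm_intel (or kvm_amd) should show up
lsmod | grep kvm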

Print the system information of each worker by running the uname command:

dbi@worker1:~$ uname -a
Linux worker1 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
dbi@worker2:~$ uname -a
Linux worker2 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

And then the same system information for each Nginx container:

dbi@master:~$ kubectl exec -it nginx-on-runc -- bash
root@nginx-on-runc:/# uname -a
Linux nginx-on-runc 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021 x86_64 GNU/Linux
dbi@master:~$ kubectl exec -it nginx-on-kata -- bash
root@nginx-on-kata:/# uname -a
Linux nginx-on-kata 5.10.25 #2 SMP Tue Aug 31 22:48:37 UTC 2021 x86_64 GNU/Linux

Here we are 🙂 the container running on Kata Containers has a different (and more recent!) kernel version.

dbi@worker1:~$ ps -ef |grep qemu
root 24617 24609 0 17:44 ? 00:00:00 /opt/kata/libexec/kata-qemu/virtiofsd --syslog -o cache=auto -o no_posix_lock -o source=/run/kata-containers/shared/sandboxes/886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6/shared --fd=3 -f --thread-pool-size=1
root 24623 1 0 17:44 ? 00:00:14 /opt/kata/bin/qemu-system-x86_64 -name sandbox-886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6 -uuid 56bd770d-ce6f-4313-8ac1-9429d0bf7a19 -machine q35,accel=kvm,kernel_irqchip=on,nvdimm=on -cpu host,pmu=off -qmp unix:/run/vc/vm/886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6/qmp.sock,server=on,wait=off -m 2048M,slots=10,maxmem=3011M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2 -device virtio-serial-pci,disable-modern=true,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6/console.sock,server=on,wait=off -device nvdimm,id=nv0,memdev=mem0,unarmed=on -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/kata-clearlinux-latest.image,size=268435456,readonly=on -device virtio-scsi-pci,id=scsi0,disable-modern=true -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=true,vhostfd=3,id=vsock-1696826128,guest-cid=1696826128 -chardev socket,id=char-6546ada211d5c7ec,path=/run/vc/vm/886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-6546ada211d5c7ec,tag=kataShared -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=62:2c:d8:29:1c:f3,disable-modern=true,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /opt/kata/share/kata-containers/vmlinux-5.10.25-85 -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 quiet systemd.show_status=false panic=1 nr_cpus=2 systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none -pidfile /run/vc/vm/886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6/pid -smp 1,cores=1,threads=1,sockets=2,maxcpus=2
root 24628 24617 0 17:44 ? 00:00:03 /opt/kata/libexec/kata-qemu/virtiofsd --syslog -o cache=auto -o no_posix_lock -o source=/run/kata-containers/shared/sandboxes/886de7e8561b601a3c0a2af05b85beb9afaa8c2ac376b61c15a1a71bf49093e6/shared --fd=3 -f --thread-pool-size=1
dbi 44240 42405 0 18:11 pts/0 00:00:00 grep --color=auto qemu

Kubernetes is a very powerful system with lots of functionalities, but it is also very flexible. Its architecture, its design and the way you configure it allow it to adapt to various needs, topologies or requirements.

 

Cet article Kubernetes : two different OCI runtimes est apparu en premier sur Blog dbi services.

Monitoring – first steps with Prometheus

$
0
0

Monitoring is a crucial element of DevOps automation that has to be proactive, meaning it must find ways to improve application quality before bugs appear.
Therefore, we have to be able to quickly and easily understand what is happening in our infrastructure, which is often more and more complex and constantly evolving.
In this context, one tool stands out: Prometheus.
Prometheus is an open-source, metrics-based monitoring system and alerting tool. It collects data about applications and systems in a TSDB (time-series database) and allows you to visualise the data, and much more.
Of course, Prometheus is far from the only such tool out there. This is why I suggest you dive into a series of monitoring blogs on a tool that has the wind in its sails.

Download and Install Prometheus

In this post, we will lay the foundations, starting with the installation of Prometheus hosted on a UNIX server (Red Hat Enterprise Linux release 8.5).
We will focus specifically on the installation of Prometheus; we will cover the configuration part in another blog.

Download the pre-compiled binaries

First of all, let's download the latest binaries available on the official website, which is currently version “2.32.0-rc.0”.

[nla@DBI-POC prometheus]$ mkdir -p /share/prometheus; chmod -R 755 share; cd /share/prometheus
[nla@DBI-POC prometheus]$ wget https://github.com/prometheus/prometheus/releases/download/v2.32.0-rc.0/prometheus-2.32.0-rc.0.linux-amd64.tar.gz

--2021-12-06 16:08:04--  https://github.com/prometheus/prometheus/releases/download/v2.32.0-rc.0/prometheus-2.32.0-rc.0.linux-amd64.tar.gz
Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/6838921/d4b11ccb-c6e4-4401-9d93-1da7f6716c24?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211206T150804Z&X-Amz-Expires=300&X-Amz-Signature=56868964a039ed2a6aa1a75620956a1df24099821adc9b8b9e7a525e20df13c4&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=6838921&response-content-disposition=attachment%3B%20filename%3Dprometheus-2.32.0-rc.0.linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2021-12-06 16:08:04--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/6838921/d4b11ccb-c6e4-4401-9d93-1da7f6716c24?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211206%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211206T150804Z&X-Amz-Expires=300&X-Amz-Signature=56868964a039ed2a6aa1a75620956a1df24099821adc9b8b9e7a525e20df13c4&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=6838921&response-content-disposition=attachment%3B%20filename%3Dprometheus-2.32.0-rc.0.linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75092211 (72M) [application/octet-stream]
Saving to: 'prometheus-2.32.0-rc.0.linux-amd64.tar.gz'

prometheus-2.32.0-rc.0.linux-amd64.tar.gz                                                                                  100%[======================================================================================================================================================================================================================================================================================================================================>]  71.61M  9.15MB/s    in 9.4s

2021-12-06 16:08:14 (7.61 MB/s) - 'prometheus-2.32.0-rc.0.linux-amd64.tar.gz' saved [75092211/75092211]
[nla@DBI-POC prometheus]$

Once the binaries are downloaded, let’s uncompress the tarball.

[nla@DBI-POC prometheus]$ tar -xvf prometheus-2.32.0-rc.0.linux-amd64.tar.gz
prometheus-2.32.0-rc.0.linux-amd64/
prometheus-2.32.0-rc.0.linux-amd64/consoles/
prometheus-2.32.0-rc.0.linux-amd64/consoles/index.html.example
prometheus-2.32.0-rc.0.linux-amd64/consoles/node-cpu.html
prometheus-2.32.0-rc.0.linux-amd64/consoles/node-disk.html
prometheus-2.32.0-rc.0.linux-amd64/consoles/node-overview.html
prometheus-2.32.0-rc.0.linux-amd64/consoles/node.html
prometheus-2.32.0-rc.0.linux-amd64/consoles/prometheus-overview.html
prometheus-2.32.0-rc.0.linux-amd64/consoles/prometheus.html
prometheus-2.32.0-rc.0.linux-amd64/console_libraries/
prometheus-2.32.0-rc.0.linux-amd64/console_libraries/menu.lib
prometheus-2.32.0-rc.0.linux-amd64/console_libraries/prom.lib
prometheus-2.32.0-rc.0.linux-amd64/prometheus.yml
prometheus-2.32.0-rc.0.linux-amd64/LICENSE
prometheus-2.32.0-rc.0.linux-amd64/NOTICE
prometheus-2.32.0-rc.0.linux-amd64/prometheus
prometheus-2.32.0-rc.0.linux-amd64/promtool
[nla@DBI-POC prometheus]$

Now, let’s create our system account called prometheus:

sudo useradd -M -r -s /bin/false prometheus

We will use mkdir to create the directories that will hold the Prometheus configuration files and its data.

sudo mkdir -p /etc/prometheus /data/prometheus

Copy the files from the downloaded archive to the appropriate locations, and set ownership to the prometheus system account:

[nla@DBI-POC prometheus]$sudo cp prometheus-2.32.0-rc.0.linux-amd64/{prometheus,promtool} /usr/local/bin/
[nla@DBI-POC prometheus]$sudo chown prometheus:prometheus /usr/local/bin/{prometheus,promtool}
[nla@DBI-POC prometheus]$sudo cp -r prometheus-2.32.0-rc.0.linux-amd64/{consoles,console_libraries} /etc/prometheus/
[nla@DBI-POC prometheus]$sudo cp prometheus-2.32.0-rc.0.linux-amd64/prometheus.yml /etc/prometheus/prometheus.yml
[nla@DBI-POC prometheus]$sudo chown -R prometheus:prometheus /etc/prometheus
[nla@DBI-POC prometheus]$sudo chown prometheus:prometheus /data/prometheus

Half the work is done; let’s run Prometheus with its default configuration in the foreground, just to make sure everything is set up correctly:

[nla@DBI-POC prometheus]$prometheus --config.file=/etc/prometheus/prometheus.yml

ts=2021-12-06T17:13:06.171Z caller=main.go:406 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2021-12-06T17:13:06.171Z caller=main.go:444 level=info msg="Starting Prometheus" version="(version=2.31.1, branch=HEAD, revision=411021ada9ab41095923b8d2df9365b632fd40c3)"
ts=2021-12-06T17:13:06.172Z caller=main.go:449 level=info build_context="(go=go1.17.3, user=root@9419c9c2d4e0, date=20211105-20:35:02)"
ts=2021-12-06T17:13:06.172Z caller=main.go:450 level=info host_details="(Linux 4.18.0-348.2.1.el8_5.x86_64 #1 SMP Mon Nov 8 13:30:15 EST 2021 x86_64 DBI-POC.localdomain (none))"
ts=2021-12-06T17:13:06.172Z caller=main.go:451 level=info fd_limits="(soft=1024, hard=262144)"
ts=2021-12-06T17:13:06.172Z caller=main.go:452 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2021-12-06T17:13:06.173Z caller=web.go:542 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
ts=2021-12-06T17:13:06.173Z caller=main.go:839 level=info msg="Starting TSDB ..."
ts=2021-12-06T17:13:06.175Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false
ts=2021-12-06T17:13:06.176Z caller=head.go:479 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
ts=2021-12-06T17:13:06.176Z caller=head.go:513 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.628µs
ts=2021-12-06T17:13:06.176Z caller=head.go:519 level=info component=tsdb msg="Replaying WAL, this may take a while"
ts=2021-12-06T17:13:06.177Z caller=head.go:590 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
ts=2021-12-06T17:13:06.177Z caller=head.go:596 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=22.185µs wal_replay_duration=770.195µs total_replay_duration=818.18µs
ts=2021-12-06T17:13:06.177Z caller=main.go:866 level=info fs_type=XFS_SUPER_MAGIC
ts=2021-12-06T17:13:06.178Z caller=main.go:869 level=info msg="TSDB started"
ts=2021-12-06T17:13:06.178Z caller=main.go:996 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
ts=2021-12-06T17:13:06.186Z caller=main.go:1033 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=8.409684ms db_storage=1.003µs remote_storage=1.343µs web_handler=271ns query_engine=700ns scrape=8.118036ms scrape_sd=23.72µs notify=23.188µs notify_sd=8.332µs rules=1.081µs
ts=2021-12-06T17:13:06.186Z caller=main.go:811 level=info msg="Server is ready to receive web requests."

Here, we can see the message “Server is ready to receive web requests”, confirming that our application is running well.
Send a break to stop the process (Press Ctrl+C).
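
Optionally, before wiring Prometheus into systemd, the configuration file can be validated with promtool, which we copied to /usr/local/bin together with the prometheus binary. A quick sanity check, assuming the default configuration path used in this post:

promtool check config /etc/prometheus/prometheus.yml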

Configure Prometheus service file

Create a systemd unit file for Prometheus:
[nla@DBI-POC prometheus]$sudo vi /etc/systemd/system/prometheus.service

Define the Prometheus service in the unit file:
[Unit]
Description=Prometheus Time Series Collection and Processing Server
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /data/prometheus/ \
--web.page-title="blog.dbi-services: Prometheus Time Series Collection and Processing Server" \
--web.enable-lifecycle \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
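
A side note on the --web.enable-lifecycle flag used above: it exposes HTTP endpoints that let you reload the configuration or shut Prometheus down without restarting the service. A minimal example of a configuration reload, assuming Prometheus listens on the default port 9090:

curl -X POST http://localhost:9090/-/reload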

After saving and exiting the file, reload systemd so that it takes our changes into account.
[nla@DBI-POC prometheus]$sudo systemctl daemon-reload

Start the Prometheus service:
[nla@DBI-POC prometheus]$sudo systemctl start prometheus

Also enable the Prometheus service so that it starts automatically at boot:
[nla@DBI-POC prometheus]$sudo systemctl enable prometheus

Verify the Prometheus service is healthy:
[nla@DBI-POC prometheus]$ sudo systemctl status prometheus

● prometheus.service - Prometheus Time Series Collection and Processing Server
   Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-12-07 10:02:32 CET; 6s ago
 Main PID: 45138 (prometheus)
    Tasks: 10 (limit: 49348)
   Memory: 92.4M
   CGroup: /system.slice/prometheus.service
           └─45138 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /data/prometheus/ --web.page-title=blog.dbi-services: Prometheus Time Series Collection and Processing Server

Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.100Z caller=compact.go:518 level=info component=tsdb msg="write block" mint=1638806408045 maxt=1638813600000 ulid=01FPA22V00F7PTC3WFWNJYT653 duration=20.355646ms
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.102Z caller=head.go:803 level=info component=tsdb msg="Head GC completed" duration=930.334µs
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.102Z caller=checkpoint.go:97 level=info component=tsdb msg="Creating checkpoint" from_segment=12 to_segment=14 mint=1638813600000
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.111Z caller=head.go:972 level=info component=tsdb msg="WAL checkpoint complete" first=12 last=14 duration=9.387086ms
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.138Z caller=compact.go:459 level=info component=tsdb msg="compact blocks" count=2 mint=1638772668103 maxt=1638784800000 ulid=01FPA22V10HFAYEK62F84PPVZQ sources="[01FP82P13KRRXE2TPZBAVTEQPM 01FP82P1491P5PFA8ANPFFG8NB]" duration=26.366272ms
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.139Z caller=db.go:1293 level=info component=tsdb msg="Deleting obsolete block" block=01FP82P13KRRXE2TPZBAVTEQPM
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.141Z caller=db.go:1293 level=info component=tsdb msg="Deleting obsolete block" block=01FP82P1491P5PFA8ANPFFG8NB
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.164Z caller=compact.go:459 level=info component=tsdb msg="compact blocks" count=2 mint=1638727208042 maxt=1638784800000 ulid=01FPA22V1XK5XXXRPHB3396KR3 sources="[01FP77CXPC9Z3ZJVEFK2P5NPCM 01FPA22V10HFAYEK62F84PPVZQ]" duration=23.315044ms
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.165Z caller=db.go:1293 level=info component=tsdb msg="Deleting obsolete block" block=01FP77CXPC9Z3ZJVEFK2P5NPCM
Dec 07 10:02:38 DBI-POC.localdomain prometheus[45138]: ts=2021-12-07T09:02:38.167Z caller=db.go:1293 level=info component=tsdb msg="Deleting obsolete block" block=01FPA22V10HFAYEK62F84PPVZQ
[nla@DBI-POC prometheus]$

We can see the service is up and running, so far so good.
Make an HTTP request to Prometheus to verify that it responds. The request should return a redirect (“Found.”) to the /graph page:

[nla@DBI-POC prometheus]$ curl http://localhost:9090/
Found.
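
If you prefer a more explicit check than this redirect, Prometheus also exposes dedicated health and readiness endpoints; a quick sketch, assuming the default port:

curl http://localhost:9090/-/healthy
curl http://localhost:9090/-/ready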

And last but not least, open a browser and navigate to http://{YOUR IP ADDRESS}:9090 to access Prometheus. This should bring up the Prometheus expression browser.

Congratulations, you have successfully installed Prometheus; I’ll see you in a future post to talk about the configuration of Prometheus and how we can monitor our system.

 

Cet article Monitoring – first steps with Prometheus est apparu en premier sur Blog dbi services.

Elastic (ELK) Stack – Set up Elasticsearch


After a global overview of the Elastic Stack and a deeper dive into Elasticsearch terminology, this third blog in the Elastic Stack series shows how to set up Elasticsearch: downloading, installing, configuring, and starting it.

Supported platforms

At the beginning of any installation, check the support matrix for operating systems and JVMs, which is available on the Elastic website. Elasticsearch is tested on the listed platforms, but it may work on other platforms too.

Host Elasticsearch

It is recommended to run Elasticsearch on a dedicated host or as a primary service. Several Elasticsearch features, such as automatic JVM heap sizing, assume it’s the only resource-intensive application on the host or container.
You can run Elasticsearch on your own hardware or use the hosted Elasticsearch Service available on AWS, GCP, and Azure!

Download Elasticsearch

As I will install Elasticsearch myself, I need to download it. Elasticsearch is provided in different package formats depending on the OS.
In my case, I will download the latest stable version as a tar.gz archive, which can be installed on any Linux distribution and on macOS.
The Linux archive for Elasticsearch v7.16.2 (the latest version at the time of writing) can be downloaded as follows:

[elastic@vmelastic app]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz
--2021-12-28 13:49:54--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz
... connected.
Proxy request sent, awaiting response... 200 OK
Length: 343664171 (328M) [application/x-gzip]
Saving to: `elasticsearch-7.16.2-linux-x86_64.tar.gz'

100%[===================================================================================================================================================>] 343,664,171 35.0M/s   in 9.7s

2021-12-28 13:50:04 (33.8 MB/s) - `elasticsearch-7.16.2-linux-x86_64.tar.gz' saved [343664171/343664171]

[elastic@vmelastic app]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz.sha512
--2021-12-28 13:50:04--  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.16.2-linux-x86_64.tar.gz.sha512
... connected.
Proxy request sent, awaiting response... 200 OK
Length: 171 [binary/octet-stream]
Saving to: `elasticsearch-7.16.2-linux-x86_64.tar.gz.sha512'

100%[===================================================================================================================================================>] 171         --.-K/s   in 0s

2021-12-28 13:50:04 (23.5 MB/s) - `elasticsearch-7.16.2-linux-x86_64.tar.gz.sha512' saved [171/171]

[elastic@vmelastic app]$ shasum -a 512 -c elasticsearch-7.16.2-linux-x86_64.tar.gz.sha512
elasticsearch-7.16.2-linux-x86_64.tar.gz: OK
[elastic@vmelastic app]$

The last command compares the SHA-512 of the downloaded .tar.gz archive with the published checksum and should output elasticsearch-{version}-linux-x86_64.tar.gz: OK, as shown above.

Install Elasticsearch

Simply, extract the tar.gz file:

[elastic@vmelastic app]$ tar -xzf elasticsearch-7.16.2-linux-x86_64.tar.gz
[elastic@vmelastic app]$ ls -rtl
total 335624
drwxr-x---. 9 elastic elastic      4096 Dec 18 19:48 elasticsearch-7.16.2
-rw-r-----. 1 elastic elastic 343664171 Dec 19 11:01 elasticsearch-7.16.2-linux-x86_64.tar.gz
-rw-r-----. 1 elastic elastic       171 Dec 19 11:01 elasticsearch-7.16.2-linux-x86_64.tar.gz.sha512
[elastic@vmelastic app]$ cd elasticsearch-7.16.2

The path to this elasticsearch-7.16.2 directory is known as $ES_HOME.

The content of ES_HOME:

[elastic@vmelastic elasticsearch-7.16.2]$ ls -rtl
total 652
-rw-r-----.  1 elastic elastic   2710 Dec 18 19:40 README.asciidoc
-rw-r-----.  1 elastic elastic   3860 Dec 18 19:40 LICENSE.txt
drwxr-x---.  2 elastic elastic   4096 Dec 18 19:45 plugins
drwxr-x---.  2 elastic elastic   4096 Dec 18 19:45 logs
-rw-r-----.  1 elastic elastic 627787 Dec 18 19:45 NOTICE.txt
drwxr-x---.  3 elastic elastic   4096 Dec 18 19:48 lib
drwxr-x---.  2 elastic elastic   4096 Dec 18 19:48 bin
drwxr-x---.  9 elastic elastic   4096 Dec 18 19:48 jdk
drwxr-x---. 61 elastic elastic   4096 Dec 18 19:48 modules
drwxr-x---.  3 elastic elastic   4096 Dec 28 13:58 config

Elasticsearch writes the data you index into indices and data streams under a data directory, which is created at first start, and writes its own application logs, containing information about cluster health and operations, into a logs directory.
It is highly recommended to set path.data and path.logs in elasticsearch.yml (see below) to locations outside of $ES_HOME, because files kept in $ES_HOME risk being deleted during an upgrade.

Configure Elasticsearch

In fact, Elasticsearch ships with good defaults and requires very little configuration depending on your need. However, the configuration files should contain settings which are node-specific, such as node.name and paths, or settings which a node requires in order to be able to join a cluster, such as cluster.name and network.host.

Basically, you have to know about three configuration files:

  • elasticsearch.yml to configure Elasticsearch
  • jvm.options to configure Elasticsearch JVM settings
  • log4j2.properties to configure Elasticsearch logging

elasticsearch.yml uses the YAML format; all three files are located by default in $ES_HOME/config. You can customize the location with the ES_PATH_CONF environment variable, which is recommended for the same reason as the data and logs directories!
Please note that environment variables referenced with the ${VARIABLE} notation within the configuration files will be replaced with the value of the environment variable. This is really helpful 😉
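
As a small illustration of this substitution, a hypothetical snippet in elasticsearch.yml could pick up values exported in the environment of the Elasticsearch process (NODE_NAME and ES_DATA are made-up variable names used only for this example):

node.name: ${NODE_NAME}
path.data: ${ES_DATA}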

Set the JVM heap size

To override the default heap size, set the minimum and maximum heap size settings, Xms and Xmx.
The minimum and maximum values must be the same, and no more than 50% of your total memory, because Elasticsearch requires memory for purposes other than the JVM heap!

To do so, update the jvm.options in the custom config directory.

[elastic@vmelastic ~]$ vi $ES_PATH_CONF/jvm.options

Add the below two lines according to your environment and your need:

-Xms2g
-Xmx2g
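
As an alternative to editing jvm.options directly, recent 7.x releases can also read custom JVM flags from files placed in a jvm.options.d sub-directory of the config directory. A minimal sketch, assuming your version supports it (heap.options is just an example file name):

# content of $ES_PATH_CONF/jvm.options.d/heap.options
-Xms2g
-Xmx2g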

Configure elasticsearch.yml

To configure elasticsearch, update elasticsearch.yml in the custom config directory:

[elastic@vmelastic ~]$ vi $ES_PATH_CONF/elasticsearch.yml

Cluster name
A node can only join a cluster when it shares its cluster.name with all the other nodes in the cluster. The default name is elasticsearch, but you should change it to an appropriate name which describes the purpose of the cluster, especially if you run more than one cluster.

cluster.name: elasticsearch-logging

Node name
It is worth configuring a more meaningful node name, which also has the advantage of persisting across restarts of the node:

node.name: Master1

If you have a dedicated host per node, it makes sense to set it to the server’s HOSTNAME as follows:

node.name: ${HOSTNAME}

Data and Log path
If you are using the .zip or .tar.gz archives, the data and logs directories are sub-folders of $ES_HOME as we saw above. Please be careful, if these important folders are left in their default locations, there is a high risk of them being deleted while upgrading Elasticsearch to a new version (I know I repeat it 😉 )

So, to change the locations of the data and log folder:

path.logs: /data/log/elasticsearch
path.data: /data/elasticsearch

Network Host
By default, Elasticsearch binds to loopback addresses only (127.0.0.1 and [::1]). This may be sufficient to run a single development node on a server, but not to build a cluster with multiple nodes.

I recommend setting this value in all your installations:

network.host: XXX.XXX.X.XX

On the other hand, be aware that Elasticsearch assumes that you are moving from development mode to production mode, and upgrades a number of system startup checks from warnings to exceptions when you set the network.host!

Discovery
The discovery.seed_hosts setting provides a list of the addresses of the master-eligible nodes in the cluster. Each address has the format host:port or host. If the port is not given, it is determined by checking the following settings in order:

transport.profiles.default.port
transport.port

If neither of these is set then the default port is 9300. The default value for discovery.seed_hosts is:

discovery.seed_hosts: ["127.0.0.1", "[::1]"]

In our case, as we set up only one node, we specify it as the only master-eligible host:

discovery.seed_hosts: ["XXX.XXX.X.X"]

Now that the configuration is done, we can start Elasticsearch.

Start Elasticsearch

As said before, it is a best practice to move the config, data, and logs directories away from their default locations.

Define ES_PATH_CONF and then Elasticsearch can be started from the command line as follows:

ES_PATH_CONF=/path/to/config $ES_HOME/bin/elasticsearch
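
To run it as a daemon instead of keeping it in the foreground, the same binary accepts -d to detach and -p to write a pid file; a sketch, assuming the same paths as above and a hypothetical pid file location:

ES_PATH_CONF=/path/to/config $ES_HOME/bin/elasticsearch -d -p /path/to/elasticsearch.pid
# and later, to stop it:
pkill -F /path/to/elasticsearch.pid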

Once Elasticsearch has started, you can check it:

curl -X GET "XXX.XXX.X.XX:9200/?pretty"

With X-Pack security enabled, you will need to pass a username and password to curl:

curl -u username:password -X GET "XXX.XXX.X.XX:9200/?pretty"

The response should be like:

{
  "name" : "Master1",
  "cluster_name" : "elasticsearch-logging",
  "cluster_uuid" : "7rniLCvFRIGrsDqzJsoo6A",
  "version" : {
    "number" : "7.16.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "65f6e357953a5bc21073d89aa29",
    "build_date" : "2021-28-12T12:55:29.143308416Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}

Elasticsearch has been downloaded, configured, and started successfully. I hope this blog helps you get started with Elasticsearch. Don’t hesitate to ask questions; I will try to reply as soon as possible 🙂

Cet article Elastic (ELK) Stack – Set up Elasticsearch est apparu en premier sur Blog dbi services.

Updating Password in PostgreSQL from md5 to scram-sha-256


Many installations have a history spanning several major PostgreSQL releases.
PostgreSQL 10 introduced scram-sha-256 for hashing passwords, and when installing from packages, scram-sha-256 is the default setting for new installations since PostgreSQL 13.

In this small blog I will describe how to update passwords from md5 to scram-sha-256.
For the installation of PostgreSQL itself, there are already several blogs and articles from dbi:

Blog at dbi-services.com
Article at heise.de

So I will not repeat these steps.

Passwords in PostgreSQL are stored in the catalog pg_authid, and for this blog I have a user called test.

$ postgres=# select rolpassword from pg_authid where rolname = 'test';
$              rolpassword
$ -------------------------------------
$  md56e4b266b2a0fbaa2c08d61bdefe7ee48
$ (1 row)
$ 
$ postgres=#

We can see that the password is hashed with md5.

Mostly for IP ranges, sometimes also for specific users, there is a corresponding entry in pg_hba.conf.

$ # TYPE  DATABASE        USER            ADDRESS                 METHOD
$ host    all             test            127.0.0.1/32            md5

As a first step, we need to switch the parameter password_encryption from md5 to scram-sha-256.

$ postgres=# alter system set password_encryption = 'scram-sha-256';
$ ALTER SYSTEM
$ postgres=#

And activate this change by reloading the configuration.

$ postgres=# select pg_reload_conf();
$  pg_reload_conf
$ ----------------
$  t
$ (1 row)
$ 
$ postgres=#
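
To double-check which hashing method is now active for newly set passwords, you can ask the server directly; after the reload it should report scram-sha-256.

$ postgres=# show password_encryption;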

But keep the md5 line in pg_hba.conf until all affected passwords have been updated.

Now update the password so that it is re-hashed with scram-sha-256.

$ postgres=# alter role test with password 'password';
$ ALTER ROLE
$ postgres=#

Check the new hashing.

$ postgres=# select rolpassword from pg_authid where rolname = 'test';
$                                                               rolpassword
$ ---------------------------------------------------------------------------------------------------------------------------------------
$  SCRAM-SHA-256$4096:OSi8R7U5YM0ejUq982OX/g==$82TTXF0cnuq5puyN1mnpTFsSlkLFPDbP7+3TdxtX0B4=:QCRU75g5bDKONib5s9hwsjJsweeiswkyMBFUG0IF1Ts=
$ (1 row)
$ 
$ postgres=#

Now change the entry for this user in pg_hba.conf:

$ # TYPE  DATABASE        USER            ADDRESS                 METHOD
$ host    all             test            127.0.0.1/32            scram-sha-256

Reload the configuration of PostgreSQL again.

$ postgres=# select pg_reload_conf();
$  pg_reload_conf
$ ----------------
$  t
$ (1 row)
$ 
$ postgres=#

Now the password encryption change from md5 to scram-sha-256 is complete.

In many cases, old systems are migrated to a completely new environment (new OS, latest PostgreSQL version) by using pg_dump and pg_restore.
In these cases all users can be migrated to the new environment by using pg_dumpall -r > users.sql or pg_dumpall --roles-only > users.sql.
These files can be imported with psql -f users.sql; there will be an error message that the postgres user already exists, but this can be ignored.
All these imported users will still have their md5-hashed passwords, so it makes sense to update these user passwords directly afterwards.
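
If you want to find out which roles still carry an md5 hash after such a migration, a small helper query against pg_authid (not part of the original migration steps) lists them:

$ postgres=# select rolname from pg_authid where rolpassword like 'md5%';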

Cet article Updating Password in PostgreSQL from md5 to scram-sha-256 est apparu en premier sur Blog dbi services.

How to setup a Consul Cluster on RHEL 8, Rocky Linux 8, AlmaLinux 8


This blog describes the setup of a Consul cluster on RHEL 8 and its clones; it will be the base for a Patroni HA setup using RPM packages from postgresql.org.
Many Patroni setups use etcd, but etcd is not available as an RPM out of the box for RHEL 8 and clones, and in many cases using tar files or RPMs from unknown sources is not allowed.

I use a Rocky Linux 8.5 minimal installation, patched before writing this blog.

[root@patroni-01 ~]# cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.5 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.5 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
[root@patroni-01 ~]#

It will be a three-node cluster with the following nodes:

[root@patroni-01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.198.132 patroni-01.patroni.test patroni-01
192.168.198.133 patroni-02.patroni.test patroni-02
192.168.198.134 patroni-03.patroni.test patroni-03
[root@patroni-01 ~]#

The Consul RPM will come from the postgresql.org repository, so we need to disable the postgresql module of the OS repository on all three nodes.

$ [root@patroni-01 ~]# dnf -y module disable postgresql
$ Last metadata expiration check: 1:21:07 ago on Fri 04 Mar 2022 12:53:22 PM CET.
$ Dependencies resolved.
$ ====================================================================================================
$  Package                Architecture          Version                  Repository              Size
$ ====================================================================================================
$ Disabling modules:
$  postgresql
$ 
$ Transaction Summary
$ ====================================================================================================
$ 
$ Complete!
$ [root@patroni-01 ~]#

The next step is adding the postgresql.org repository.

$ [root@patroni-01 ~]# dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
$ Last metadata expiration check: 1:49:40 ago on Fri 04 Mar 2022 12:53:22 PM CET.
$ pgdg-redhat-repo-latest.noarch.rpm                                                                         13 kB/s |  12 kB     00:00
$ Dependencies resolved.
$ ==========================================================================================================================================
$  Package                               Architecture                Version                        Repository                         Size
$ ==========================================================================================================================================
$ Installing:
$  pgdg-redhat-repo                      noarch                      42.0-23                        @commandline                       12 k
$ 
$ Transaction Summary
$ ==========================================================================================================================================
$ Install  1 Package
$ 
$ Total size: 12 k
$ Installed size: 12 k
$ Is this ok [y/N]: y
$ Downloading Packages:
$ Running transaction check
$ Transaction check succeeded.
$ Running transaction test
$ Transaction test succeeded.
$ Running transaction
$   Preparing        :                                                                                                                  1/1
$   Installing       : pgdg-redhat-repo-42.0-23.noarch                                                                                  1/1
$   Verifying        : pgdg-redhat-repo-42.0-23.noarch                                                                                  1/1
$ 
$ Installed:
$   pgdg-redhat-repo-42.0-23.noarch
$ 
$ Complete!
$ [root@patroni-01 ~]#

As written in the beginning, Consul will be part of a Patroni-based HA cluster, so I will install all needed packages.
But first I edit the pgdg repo file to enable PostgreSQL 14 only and disable all other versions.

[root@patroni-01 ~]# cat /etc/yum.repos.d/pgdg-redhat-all.repo
#######################################################
# PGDG Red Hat Enterprise Linux / CentOS repositories #
#######################################################

# PGDG Red Hat Enterprise Linux / CentOS stable common repository for all PostgreSQL versions

[pgdg-common]
name=PostgreSQL common RPMs for RHEL/CentOS $releasever - $basearch
baseurl=https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
repo_gpgcheck = 1

# Red Hat recently breaks compatibility between 8.n and 8.n+1. PGDG repo is
# affected with the LLVM repo. This is a band aid repo for the llvmjit users
# whose installations cannot be updated.

[pgdg-centos8-sysupdates]
name=PostgreSQL Supplementary ucommon RPMs for RHEL/CentOS $releasever - $basearch
baseurl=https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-$releasever-$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
repo_gpgcheck = 1

# PGDG Red Hat Enterprise Linux / CentOS stable repositories:

[pgdg14]
name=PostgreSQL 14 for RHEL/CentOS $releasever - $basearch
baseurl=https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-$releasever-$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
repo_gpgcheck = 1

[pgdg13]
name=PostgreSQL 13 for RHEL/CentOS $releasever - $basearch
baseurl=https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
repo_gpgcheck = 1

Set enabled to 0 for all versions other than the one you want to install; in my case only 14 is enabled.

As preparation for the following operations we need to open ports with firewalld: port 5432 is the PostgreSQL default, the others are used by Consul.

$ [root@patroni-01 ~]# firewall-cmd --add-port={5432,8300,8301,8302,8400,8500,8600}/tcp --permanent
$ success
$ [root@patroni-01 ~]# firewall-cmd --add-port={8301,8302,8600}/udp --permanent
$ success
$ [root@patroni-01 ~]# firewall-cmd --reload
$ success
$ [root@patroni-01 ~]#
$ 
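
To verify that the ports are really open after the reload, firewalld can list them (an optional check):

firewall-cmd --list-ports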

Now it is time to install Consul; as it will be part of a Patroni cluster, I install all needed packages in one step.

$ [root@patroni-01 ~]# dnf install consul postgresql14 postgresql14-server postgresql14-contrib haproxy keepalived patroni
$ PostgreSQL common RPMs for RHEL/CentOS 8 - x86_64                                           83  B/s | 195  B     00:02
$ PostgreSQL common RPMs for RHEL/CentOS 8 - x86_64                                          1.6 MB/s | 1.7 kB     00:00
$ Importing GPG key 0x442DF0F8:
$  Userid     : "PostgreSQL RPM Building Project "
$  Fingerprint: 68C9 E2B9 1A37 D136 FE74 D176 1F16 D2E1 442D F0F8
$  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
$ Is this ok [y/N]: y
$ PostgreSQL common RPMs for RHEL/CentOS 8 - x86_64                                          186 kB/s | 619 kB     00:03
$ PostgreSQL 14 for RHEL/CentOS 8 - x86_64                                                   129  B/s | 195  B     00:01
$ PostgreSQL 14 for RHEL/CentOS 8 - x86_64                                                   1.6 MB/s | 1.7 kB     00:00
$ Importing GPG key 0x442DF0F8:
$  Userid     : "PostgreSQL RPM Building Project "
$  Fingerprint: 68C9 E2B9 1A37 D136 FE74 D176 1F16 D2E1 442D F0F8
$  From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
$ Is this ok [y/N]: y
$ PostgreSQL 14 for RHEL/CentOS 8 - x86_64                                                    74 kB/s | 206 kB     00:02
$ Dependencies resolved.
$ ===========================================================================================================================
$  Package                           Architecture  Version                                          Repository          Size
$ ===========================================================================================================================
$ Installing:
$  consul                            x86_64        1.10.3-1.rhel8                                   pgdg-common         16 M
$  haproxy                           x86_64        1.8.27-2.el8                                     appstream          1.4 M
$  keepalived                        x86_64        2.1.5-6.el8                                      appstream          535 k
$  patroni                           x86_64        2.1.3-1.rhel8                                    pgdg-common        863 k
$  postgresql14                      x86_64        14.2-1PGDG.rhel8                                 pgdg14             1.5 M
$  postgresql14-contrib              x86_64        14.2-1PGDG.rhel8                                 pgdg14             723 k
$  postgresql14-server               x86_64        14.2-1PGDG.rhel8                                 pgdg14             5.7 M
$ Installing dependencies:
$  libicu                            x86_64        60.3-2.el8_1                                     baseos             8.8 M
$  lm_sensors-libs                   x86_64        3.4.0-23.20180522git70f7e08.el8                  baseos              58 k
$  lz4                               x86_64        1.8.3-3.el8_4                                    baseos             102 k
$  mariadb-connector-c               x86_64        3.1.11-2.el8_3                                   appstream          199 k
$  mariadb-connector-c-config        noarch        3.1.11-2.el8_3                                   appstream           14 k
$  net-snmp-agent-libs               x86_64        1:5.8-22.el8                                     appstream          747 k
$  net-snmp-libs                     x86_64        1:5.8-22.el8                                     baseos             826 k
$  perl-Carp                         noarch        1.42-396.el8                                     baseos              29 k
$  perl-Data-Dumper                  x86_64        2.167-399.el8                                    baseos              57 k
$  perl-Digest                       noarch        1.17-395.el8                                     appstream           26 k
$  perl-Digest-MD5                   x86_64        2.55-396.el8                                     appstream           36 k
$  perl-Encode                       x86_64        4:2.97-3.el8                                     baseos             1.5 M
$  perl-Errno                        x86_64        1.28-420.el8                                     baseos              75 k
$  perl-Exporter                     noarch        5.72-396.el8                                     baseos              33 k
$  perl-File-Path                    noarch        2.15-2.el8                                       baseos              37 k
$  perl-File-Temp                    noarch        0.230.600-1.el8                                  baseos              62 k
$  perl-Getopt-Long                  noarch        1:2.50-4.el8                                     baseos              62 k
$  perl-HTTP-Tiny                    noarch        0.074-1.el8                                      baseos              57 k
$  perl-IO                           x86_64        1.38-420.el8                                     baseos             141 k
$  perl-MIME-Base64                  x86_64        3.15-396.el8                                     baseos              30 k
$  perl-Net-SSLeay                   x86_64        1.88-1.module+el8.4.0+512+d4f0fc54               appstream          378 k
$  perl-PathTools                    x86_64        3.74-1.el8                                       baseos              89 k
$  perl-Pod-Escapes                  noarch        1:1.07-395.el8                                   baseos              19 k
$  perl-Pod-Perldoc                  noarch        3.28-396.el8                                     baseos              85 k
$  perl-Pod-Simple                   noarch        1:3.35-395.el8                                   baseos             212 k
$  perl-Pod-Usage                    noarch        4:1.69-395.el8                                   baseos              33 k
$  perl-Scalar-List-Utils            x86_64        3:1.49-2.el8                                     baseos              67 k
$  perl-Socket                       x86_64        4:2.027-3.el8                                    baseos              58 k
$  perl-Storable                     x86_64        1:3.11-3.el8                                     baseos              97 k
$  perl-Term-ANSIColor               noarch        4.06-396.el8                                     baseos              45 k
$  perl-Term-Cap                     noarch        1.17-395.el8                                     baseos              22 k
$  perl-Text-ParseWords              noarch        3.30-395.el8                                     baseos              17 k
$  perl-Text-Tabs+Wrap               noarch        2013.0523-395.el8                                baseos              23 k
$  perl-Time-Local                   noarch        1:1.280-1.el8                                    baseos              32 k
$  perl-URI                          noarch        1.73-3.el8                                       appstream          115 k
$  perl-Unicode-Normalize            x86_64        1.25-396.el8                                     baseos              81 k
$  perl-constant                     noarch        1.33-396.el8                                     baseos              24 k
$  perl-interpreter                  x86_64        4:5.26.3-420.el8                                 baseos             6.3 M
$  perl-libnet                       noarch        3.11-3.el8                                       appstream          120 k
$  perl-libs                         x86_64        4:5.26.3-420.el8                                 baseos             1.6 M
$  perl-macros                       x86_64        4:5.26.3-420.el8                                 baseos              71 k
$  perl-parent                       noarch        1:0.237-1.el8                                    baseos              19 k
$  perl-podlators                    noarch        4.11-1.el8                                       baseos             117 k
$  perl-threads                      x86_64        1:2.21-2.el8                                     baseos              60 k
$  perl-threads-shared               x86_64        1.58-2.el8                                       baseos              47 k
$  postgresql14-libs                 x86_64        14.2-1PGDG.rhel8                                 pgdg14             275 k
$  python3-cdiff                     noarch        1.0-1.rhel8                                      pgdg-common         30 k
$  python3-click                     noarch        6.7-8.el8                                        appstream          130 k
$  python3-pip                       noarch        9.0.3-20.el8.rocky.0                             appstream           19 k
$  python3-prettytable               noarch        0.7.2-14.el8                                     appstream           43 k
$  python3-psutil                    x86_64        5.4.3-11.el8                                     appstream          372 k
$  python3-psycopg2                  x86_64        2.8.6-1.rhel8                                    pgdg-common        178 k
$  python3-pyyaml                    x86_64        3.12-12.el8                                      baseos             192 k
$  python3-setuptools                noarch        39.2.0-6.el8                                     baseos             162 k
$  python3-ydiff                     noarch        1.2-10.rhel8                                     pgdg-common         30 k
$  python36                          x86_64        3.6.8-38.module+el8.5.0+671+195e4563             appstream           18 k
$ Installing weak dependencies:
$  perl-IO-Socket-IP                 noarch        0.39-5.el8                                       appstream           46 k
$  perl-IO-Socket-SSL                noarch        2.066-4.module+el8.4.0+512+d4f0fc54              appstream          297 k
$  perl-Mozilla-CA                   noarch        20160104-7.module+el8.4.0+529+e3b3e624           appstream           14 k
$ Enabling module streams:
$  perl                                            5.26
$  perl-IO-Socket-SSL                              2.066
$  perl-libwww-perl                                6.34
$  python36                                        3.6
$ 
$ Transaction Summary
$ ===========================================================================================================================
$ Install  66 Packages
$ 
$ Total download size: 51 M
$ Installed size: 203 M
$ Is this ok [y/N]:

Installing from the postgresql.org repository also creates the postgres user and group.

I want Consul to run as the postgres user as well; for this we need to adapt User and Group within the service file.
The service file is located at /usr/lib/systemd/system/consul.service.

[root@patroni-01 ~]# cat /usr/lib/systemd/system/consul.service
[Unit]
Description=Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable.
Documentation=http://www.consul.io
After=network-online.target
Wants=network-online.target

[Service]
User=postgres
Group=postgres
EnvironmentFile=-/etc/sysconfig/consul
ExecStart=/usr/bin/consul $CMD_OPTS
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@patroni-01 ~]#

The next step is adapting the Consul environment file; I want the Consul data directory under /pgdata.
The environment file is located at /etc/sysconfig/consul.

[root@patroni-01 ~]# cat /etc/sysconfig/consul
CMD_OPTS="agent -config-dir=/etc/consul.d -data-dir=/pgdata/consul"
#GOMAXPROCS=4
[root@patroni-01 ~]#

Create the Consul data directory.

$ [root@patroni-01 ~]# mkdir /pgdata/consul
$ [root@patroni-01 ~]# chown -R postgres:postgres /pgdata/

And generate the Consul encryption key (the same key is used on all three nodes).

$ [root@patroni-01 ~]# consul keygen
$ 5mSUIrSSXp+usVR1qqM68CD2lnFLaTcg4G48l9zJhqE=
$ [root@patroni-01 ~]#

Now it is time to adapt the Consul configuration file on each node.
Node patroni-01:

[root@patroni-01 ~]# cat /etc/consul.d/consul.json-dist.hcl
{
    "server": true,
    "data_dir": "/pgdata/consul",
    "log_level": "INFO"
    "disable_update_check": true,
    "disable_anonymous_signature": true,
    "advertise_addr": "192.168.198.132",
    "bind_addr": "192.168.198.132",
    "bootstrap_expect": 3,
    "client_addr": "0.0.0.0",
    "domain": "patroni.test",
    "enable_script_checks": true,
    "dns_config": {
        "enable_truncate": true,
        "only_passing": true
    },
    "enable_syslog": true,
    "encrypt": "5mSUIrSSXp+usVR1qqM68CD2lnFLaTcg4G48l9zJhqE=",
    "leave_on_terminate": true,
    "log_level": "INFO",
    "rejoin_after_leave": true,
    "retry_join": [
        "patroni-01",
        "patroni-02",
        "patroni-03"
    ],
    "server": true,
    "start_join": [
        "patroni-01",
        "patroni-02",
        "patroni-03"
    ],
    "ui_config.enabled": true
}
[root@patroni-01 ~]#

Node patroni-02:

[root@patroni-02 ~]# cat /etc/consul.d/consul.json-dist.hcl
{
    "server": true,
    "data_dir": "/pgdata/consul",
    "log_level": "INFO"
    "disable_update_check": true,
    "disable_anonymous_signature": true,
    "advertise_addr": "192.168.198.133",
    "bind_addr": "192.168.198.133",
    "bootstrap_expect": 3,
    "client_addr": "0.0.0.0",
    "domain": "patroni.test",
    "enable_script_checks": true,
    "dns_config": {
        "enable_truncate": true,
        "only_passing": true
    },
    "enable_syslog": true,
    "encrypt": "5mSUIrSSXp+usVR1qqM68CD2lnFLaTcg4G48l9zJhqE=",
    "leave_on_terminate": true,
    "log_level": "INFO",
    "rejoin_after_leave": true,
    "retry_join": [
        "patroni-01",
        "patroni-02",
        "patroni-03"
    ],
    "server": true,
    "start_join": [
        "patroni-01",
        "patroni-02",
        "patroni-03"
    ],
    "ui_config.enabled": true
}
[root@patroni-02 ~]#

Node patroni-03:

[root@patroni-03 ~]# cat /etc/consul.d/consul.json-dist.hcl
{
    "server": true,
    "data_dir": "/pgdata/consul",
    "log_level": "INFO"
    "disable_update_check": true,
    "disable_anonymous_signature": true,
    "advertise_addr": "192.168.198.134",
    "bind_addr": "192.168.198.134",
    "bootstrap_expect": 3,
    "client_addr": "0.0.0.0",
    "domain": "patroni.test",
    "enable_script_checks": true,
    "dns_config": {
        "enable_truncate": true,
        "only_passing": true
    },
    "enable_syslog": true,
    "encrypt": "5mSUIrSSXp+usVR1qqM68CD2lnFLaTcg4G48l9zJhqE=",
    "leave_on_terminate": true,
    "log_level": "INFO",
    "rejoin_after_leave": true,
    "retry_join": [
        "patroni-01",
        "patroni-02",
        "patroni-03"
    ],
    "server": true,
    "start_join": [
        "patroni-01",
        "patroni-02",
        "patroni-03"
    ],
    "ui_config.enabled": true
}
[root@patroni-03 ~]#

And make the files accessible to the postgres user.

$ [root@patroni-01 ~]# chown -R postgres:postgres /etc/consul.d/
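
Before starting the service, the configuration directory can be checked with Consul’s built-in validator (an optional sanity check):

consul validate /etc/consul.d/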

Now it is time to start Consul on each node.

$ [root@patroni-01 ~]# systemctl start consul
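
If Consul should come back automatically after a reboot, also enable the unit on each node (not shown in the run above):

systemctl enable consul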

Check the status.

[root@patroni-01 ~]# consul members
$ Node                     Address               Status  Type    Build   Protocol  DC   Segment
$ patroni-01.patroni.test  192.168.198.132:8301  alive   server  1.10.3  2         dc1  
$ patroni-02.patroni.test  192.168.198.133:8301  alive   server  1.10.3  2         dc1  
$ patroni-03.patroni.test  192.168.198.134:8301  alive   server  1.10.3  2         dc1  
$ [root@patroni-01 ~]#

Sometimes the Consul auto-join has issues; in this case a manual join helps.

$ [root@patroni-01 ~]# consul join 192.168.198.132 192.168.198.133 192.168.198.134
$ Successfully joined cluster by contacting 3 nodes.
$ [root@patroni-01 ~]# consul members
$ Node                     Address               Status  Type    Build   Protocol  DC   Segment
$ patroni-01.patroni.test  192.168.198.132:8301  alive   server  1.10.3  2         dc1  
$ patroni-02.patroni.test  192.168.198.133:8301  alive   server  1.10.3  2         dc1  
$ patroni-03.patroni.test  192.168.198.134:8301  alive   server  1.10.3  2         dc1  
$ [root@patroni-01 ~]#
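
To see which node currently acts as the Raft leader, the operator command can be run on any member (an optional check):

consul operator raft list-peers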

That was the first part of a Patroni HA Setup using Consul instead of ETCD.

Cet article How to setup a Consul Cluster on RHEL 8, Rocky Linux 8, AlmaLinux 8 est apparu en premier sur Blog dbi services.
