
Thursday, November 30, 2023

join RHEL to AD using sssd


1. Install the required packages

# yum install sssd realmd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation openldap-clients policycoreutils-python

2. With all packages installed, join the RHEL system to the Active Directory domain.

# realm join --user=[your AD user] [Your domain name]

3. Verify the domain status

# realm list

4. Now that the Linux server is part of the AD domain, domain users can log in with their usual credentials. To restrict access to a specific user group, configure /etc/sssd/sssd.conf:

[domain/example.com]
ad_domain = example.com
ad_enabled_domains = example.com
ad_server = to8pdc01.example.com
ad_backup_server = to8pdc02.example.com
dns_discovery_domain = example.com
fallback_homedir = /home/%u
ldap_id_mapping = True
id_provider = ad
auth_provider = ad
access_provider = ad
chpass_provider = ad
use_fully_qualified_names = False
realmd_tags = manages-system joined-with-samba
ad_enable_gc = True
ad_gpo_default_right = permit
dyndns_update = False
ad_gpo_access_control = permissive
krb5_server = to8pdc01.example.com
krb5_realm = EXAMPLE.COM
cache_credentials = True
krb5_store_password_if_offline = True
ldap_user_ssh_public_key = altSecurityIdentities
debug_level = 0
ad_access_filter = (|(&(objectClass=user)(memberOf=CN=gad_unix,OU=managed_groups,OU=groups,OU=symcor inc,DC=symprod,DC=com)(unixHomeDirectory=*)))

5. Restart sssd (the config file must be owned by root and mode 0600, or sssd will refuse to start)

# systemctl restart sssd
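A quick, standalone way to sanity-check the access settings before (or after) the restart is to pull key = value pairs out of the file with awk. The sketch below runs against an inline copy of the config so it is self-contained; in practice, point CONF at /etc/sssd/sssd.conf.

```shell
# Sanity-check selected sssd.conf settings by extracting "key = value" pairs.
# CONF is an inline copy here for illustration; use /etc/sssd/sssd.conf in practice.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[domain/example.com]
access_provider = ad
use_fully_qualified_names = False
ad_gpo_access_control = permissive
EOF

# Print the value for a given key; tolerates spaces around '='.
get() { awk -F' *= *' -v k="$1" '$1 == k { print $2 }' "$CONF"; }

ap=$(get access_provider)
gpo=$(get ad_gpo_access_control)
echo "access_provider=$ap gpo_mode=$gpo"
```

If the values differ from what you expect, fix the file before restarting sssd.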

 




run gitlab-ee with an AD user

 1. Install gitlab-ee

1.1 Install dependency

sudo yum install -y curl policycoreutils-python openssh-server perl
# Enable OpenSSH server daemon if not enabled: sudo systemctl status sshd
sudo systemctl enable sshd
sudo systemctl start sshd
# Check if opening the firewall is needed with: sudo systemctl status firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo systemctl reload firewalld
sudo yum install postfix
sudo systemctl enable postfix
sudo systemctl start postfix
1.2 Add the GitLab package repository and install the package
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh | sudo bash
sudo EXTERNAL_URL="https://gitlab.yourdomain.com" yum install -y gitlab-ee
# List available versions: sudo yum --showduplicates list gitlab-ee
# Install a specific version: sudo yum install gitlab-ee-16.1.4-ee.0.el7.x86_64
# Pin the version to limit auto-updates (requires yum-plugin-versionlock): sudo yum versionlock gitlab-ee*
2. Change the GitLab user by editing /etc/gitlab/gitlab.rb
user['username'] = "git"
user['group'] = "grp_git"
user['uid'] = 1372500825
user['gid'] = 1372500825
3. Run a reconfigure
# gitlab-ctl reconfigure
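The uid/gid values in gitlab.rb have to match what sssd resolves for the AD account (compare with `id git`). A minimal consistency check, run here against an inline gitlab.rb fragment with the illustrative values above:

```shell
# Check that the uid and gid configured in gitlab.rb agree with each other.
# RB is an inline fragment for illustration; use /etc/gitlab/gitlab.rb in
# practice and compare against `id -u git` / `id -g git` from sssd.
RB=$(mktemp)
cat > "$RB" <<'EOF'
user['username'] = "git"
user['group'] = "grp_git"
user['uid'] = 1372500825
user['gid'] = 1372500825
EOF

uid=$(grep "user\['uid'\]" "$RB" | awk -F'= *' '{ print $2 }')
gid=$(grep "user\['gid'\]" "$RB" | awk -F'= *' '{ print $2 }')
echo "configured uid=$uid gid=$gid"
```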

Wednesday, November 29, 2023

gitlab hashed storage migration failed for some projects

 

1.     gitlab-rails console

2.     Find the project by name: p = Project.find_by_name("<project-name>")

3.     Confirm that the repository is in fact read-only: p.repository_read_only

4.     Unset the repository_read_only flag: p.update!(repository_read_only: nil)

5.     Retry the corresponding Sidekiq job in the Admin Area 

6.     Rinse and repeat from step 2

Or you can do this with a script. Create a file named fix.rb with the following content:

# Find all projects that GitLab thinks are in legacy storage
Project.without_storage_feature(:repository).find_each(batch_size: 50) do |p|
  # Clear the read-only flag
  p.update!(repository_read_only: nil)
end

Then run it:

gitlab-rails runner ./fix.rb

Note: the script is safe to rerun; if some projects still fail, run it again until everything is fixed.

use Rails console to reset the gitlab root password

1. Open a rails console:

# gitlab-rails console

2. Find the user:

user = User.find_by_username 'root'

3. Reset the root password

newpass = 'secretpass'

user.password = newpass

user.password_confirmation = newpass

4. Save the changes

user.save!

5. exit console

exit


Thursday, November 16, 2023

 

Set up masquerading in AIX 7.2 TL 5 and later



To enable masquerading, build a submit.cf implementing this feature with the following procedure:

 

cd /usr/samples/tcpip/sendmail/cf

cp submit.mc submit-masquerade.mc

vi submit-masquerade.mc

At the end of the file, the following line needs to be added:

 

MASQUERADE_AS(`masquerade.domain.com')

Note m4's special quoting syntax: a backquote (`) opens the string and a straight single quote (') closes it. After the file is saved, it can be compiled:

m4 ../m4/cf.m4 submit-masquerade.mc > /tmp/submit.cf

If there are no errors, the submit.cf file can be installed:

cd /etc/mail

mv submit.cf submit.cf.org

cp /tmp/submit.cf .

manually install a package on AIX

1. Copy the compressed fileset images (.bff.Z files) to a directory such as /tmp/bvv

# cd /tmp/bvv
# uncompress *
# inutoc .
# installp -d . -L            (list all the packages inside /tmp/bvv)
# installp -d . fileset-name  (fileset-name comes from the previous step)


Friday, November 10, 2023

AIX NIM client not able to connect to master

From the NIM master:

# nim -o showlog clientname

which returns an access denied error. Reset and remove the client definition:

# nim -Fo reset clientname
# nim -o deallocate -a subclass=all clientname
# nim -o remove clientname

Now go to the client:

# rm /etc/niminfo*

Redefine the client, this time from the client side:

 # smitty nim 

or use the command line:

 # niminit -a name= -a master= -a connect=nimsh

replace a rootvg disk on an AIX WPAR

1. Configure and add hdisk1 in the global environment

# cfgmgr

2. Add hdisk1 to the WPAR from the global environment

# chwpar -D devname=hdisk1 rootvg=yes wparname

3. clogin to the WPAR to migrate the physical volume (here hdiskX is the new drive, hdisk1 in the global environment, and hdiskY, hdisk2 in the global environment, is the old drive holding rootvg):

# clogin wparname
# cfgmgr                       (configure the new drive)
# extendvg rootvg hdiskX
# mirrorvg -S rootvg hdiskX    (-S syncs in the background; wait until lsvg rootvg shows no stale PPs before unmirroring)
# unmirrorvg rootvg hdiskY
# reducevg rootvg hdiskY
# rmdev -dl hdiskY
# exit                         (back to the global environment)

4. Remove the old hdisk2 from the global environment:

# chwpar -K -D devname=hdisk2 wparname
# cfgmgr -v

migrate AIX 7.1 to 7.2 using nimadm

nimadm copies the client's rootvg to a spare disk (hdisk1) and simultaneously migrates that copy to the new OS version.

Advantages:

Minimal downtime: once the OS migration completes, the only downtime is the reboot of the client into the new OS version (AIX 7.2).

Easy rollback: if applications or databases misbehave on the new level, rolling back is simple: just change the bootlist and reboot the client. If nimadm fails mid-way, any changes are confined to altinst_rootvg; the original rootvg is untouched.

 Pre-check on NIM MASTER: 

1. Make sure the NIM master is on AIX 7200-05

oslevel -s 

2. Create a lpp_source

 nim -o define -t lpp_source -a server=master -a source=/export/AIX_ISO/AIX_V7.2_Install_7200-05_DVD1of2.iso -a packages=all -a location=/export/lpp_source/lpp_7200-05 lpp725 

 or via smitty: smitty nim_mkres 

3. Create the SPOT:

nim -o define -t spot -a server=master -a source=lpp725 -a location=/export/spot/spot-725 spot-725

or via smitty: smitty nim_mkres

4. Check if the NIM client is defined

lsnim | grep "client_name"

If not, create the NIM client with smitty: smit nim_mkmac

5. The bos.alt_disk_install.rte fileset must be installed on the NIM master and also in the SPOT. If not, you have to add the fileset to the SPOT.

# lslpp -l bos.alt_disk_install.rte 

 # nim -o showres lpp725 | grep -i bos.alt_disk_install.rte (Check on lpp_source) 

# nim -o showres spot-725 | grep -i bos.alt_disk_install.rte (Check on SPOT) 

If the fileset is missing, add it to the SPOT:

 # nim -o cust -a filesets=bos.alt_disk_install.rte -a lpp_source=lpp725 spot-725 (add fileset to SPOT)

6. Check if there is a spare VG, nimadmvg, which can be used to build the client rootvg cache. If not, ask the storage team to add a new disk and create an empty VG.

7. Make sure the nimsh connection to the client is good, or that the rsh service is open. Check /etc/inetd.conf and make sure the shell/kshell, login/klogin, and exec services are not commented out; if any of them are, change it and restart the service:

# refresh -s inetd
# lssrc -s nimsh
# nim -o showlog nimclientname

Pre-check on NIM client  

1. Ask the application team to confirm that all applications are compatible with AIX 7.2

2. Take a mksysb backup of the client

3. Check that rootvg LV names are no longer than 11 characters

# lsvg -l rootvg 
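This check can be scripted. The sketch below runs against sample `lsvg -l` output (mocked here, since lsvg only exists on AIX) and flags any LV name longer than 11 characters:

```shell
# Flag rootvg LV names longer than 11 characters (a nimadm constraint).
# Mocked lsvg -l output for illustration; on AIX pipe in `lsvg -l rootvg`.
lvs=$(cat <<'EOF'
rootvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hd5                 boot       1       2       2    closed/syncd  N/A
hd6                 paging     8       16      2    open/syncd    N/A
averylonglvname1    jfs2       4       8       2    open/syncd    /data
EOF
)
# Skip the VG name and header lines, then test each LV name's length.
bad=$(printf '%s\n' "$lvs" | awk 'NR > 2 && length($1) > 11 { print $1 }')
echo "LV names over 11 chars: $bad"
```

Any name it prints must be renamed before running nimadm.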

4. Verify the /etc/niminfo file; the NIM master details must be present in it. If not, run smitty: smit nim

 5. Check if there is a spare disk for the alt rootvg 

 6. Run the pre_migration script /usr/lpp/bos/pre_migration 

7. Commit all applied filesets

# installp -c all

Run nimadm

On the NIM master, perform the upgrade:

# nimadm -j nimadmvg -c clientname -s spot-725 -l lpp725 -d hdisk1 -Y

hdisk1 is the spare disk name on the client; if an alt_rootvg disk already exists there, delete that alt_rootvg and reuse the disk as the spare. Once the migration starts, cache file systems are created in nimadmvg, and the process runs through 12 phases.

Post-migration: after nimadm completes, check the bootlist on the client:

# bootlist -m normal -o

which should be set to hdisk1 (altinst_rootvg).

Reboot the server.
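The final bootlist check can be scripted the same way; this sketch parses sample `bootlist -m normal -o` output (mocked, since bootlist is AIX-only) and confirms the freshly migrated spare disk boots first:

```shell
# Confirm the migrated disk (hdisk1, holding altinst_rootvg) boots first.
# Mocked bootlist -m normal -o output; on AIX run the real command instead.
bootlist_out=$(cat <<'EOF'
hdisk1 blv=hd5 pathid=0
hdisk0 blv=hd5 pathid=0
EOF
)
first=$(printf '%s\n' "$bootlist_out" | awk 'NR == 1 { print $1 }')
echo "first boot device: $first"
```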