Security Bulletin
Summary
The Linux Kernel is used by IBM Netezza Host Management. This bulletin provides mitigations for the reported CVEs.
Vulnerability Details
CVEID: CVE-2020-11609
DESCRIPTION: Linux Kernel is vulnerable to a denial of service, caused by a NULL pointer dereference in the stv06xx subsystem in stv06xx.c and stv06xx_pb0100.c. By sending a specially-crafted request, a local attacker could exploit this vulnerability to cause a denial of service condition.
CVSS Base score: 6.2
CVSS Temporal Score: See: https://exchange.xforce.ibmcloud.com/vulnerabilities/179233 for the current score.
CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H)
CVEID: CVE-2020-11608
DESCRIPTION: Linux Kernel is vulnerable to a denial of service, caused by a NULL pointer dereference in ov511_mode_init_regs and ov518_mode_init_regs. By sending a specially-crafted request, a local attacker could exploit this vulnerability to cause a denial of service condition.
CVSS Base score: 6.2
CVSS Temporal Score: See: https://exchange.xforce.ibmcloud.com/vulnerabilities/179232 for the current score.
CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H)
Affected Products and Versions
Affected Product(s) | Version(s)
IBM Netezza Host Management | All IBM Netezza Host Management Versions
Remediation/Fixes
None
Workarounds and Mitigations
To mitigate the reported CVEs (CVE-2020-11608 and CVE-2020-11609) on PureData System for Analytics N200x and N3001, blacklist the kernel modules gspca_ov519 and gspca_stv06xx so that they cannot load automatically, as follows:
1. Change to user nz:
[root@nzhost1 ~]# su - nz
2. Check to see if Call Home is enabled:
[nz@nzhost1 ~]$ nzcallhome -status
If enabled, disable it:
[nz@nzhost1 ~]$ nzcallhome -off
Note: Ensure that nzcallhome reports the status as disabled. If there are errors in the callHome.txt configuration file, they are listed in the output and Call Home is disabled.
3. Check the state of the Netezza system:
[nz@nzhost1 ~]$ nzstate
4. If the system state is online, stop the system using the command:
[nz@nzhost1 ~]$ nzstop
5. Wait for the system to stop, using the command:
[nz@nzhost1 ~]$ nzstate
System state is 'Stopped'.
6. Exit from the nz session to return to user root:
[nz@nzhost1 ~]$ exit
7. While logged in to the active host as root, type the following commands to stop the heartbeat processes:
[root@nzhost1 ~]# ssh ha2 /sbin/service heartbeat stop
[root@nzhost1 ~]# /sbin/service heartbeat stop
8. Run the following commands as root to prevent heartbeat from starting at boot:
[root@nzhost1 ~]# ssh ha2 /sbin/chkconfig heartbeat off
[root@nzhost1 ~]# /sbin/chkconfig heartbeat off
9. Type the following commands to stop the DRBD processes:
[root@nzhost1 ~]# ssh ha2 /sbin/service drbd stop
[root@nzhost1 ~]# /sbin/service drbd stop
10. Run the following commands as root to prevent DRBD from starting at boot:
[root@nzhost1 ~]# ssh ha2 /sbin/chkconfig drbd off
[root@nzhost1 ~]# /sbin/chkconfig drbd off
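Optionally, verify that both services are stopped and disabled before proceeding (repeat on ha2, or prefix each command with ssh ha2):
[root@nzhost1 ~]# /sbin/service heartbeat status
[root@nzhost1 ~]# /sbin/service drbd status
[root@nzhost1 ~]# /sbin/chkconfig --list heartbeat
[root@nzhost1 ~]# /sbin/chkconfig --list drbd
Both services should report as stopped, and chkconfig should show them off for all runlevels.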
Execute the following steps as the root user on both hosts (ha1 and ha2).
Step 1: Check whether gspca_ov519 and gspca_stv06xx are loaded on the hosts
lsmod | grep gspca_ov519
lsmod | grep gspca_stv06xx
example:
[root@nzhost1 ~]# lsmod | grep gspca_ov519
gspca_ov519 39183 0
gspca_main 25864 2 gspca_ov519,gspca_stv06xx
[root@nzhost1 ~]# lsmod | grep gspca_stv06xx
gspca_stv06xx 26519 0
gspca_main 25864 2 gspca_ov519,gspca_stv06xx
Note: If there is no output, skip Step 2 and proceed with Step 3.
Step 2: Unload the gspca_ov519 and gspca_stv06xx modules
modprobe -rv gspca_ov519
modprobe -rv gspca_stv06xx
example:
[root@nzhost1 ~]# modprobe -rv gspca_ov519
rmmod /lib/modules/2.6.32-754.31.1.el6.x86_64/kernel/drivers/media/video/gspca/gspca_ov519.ko
[root@nzhost1 ~]# modprobe -rv gspca_stv06xx
rmmod /lib/modules/2.6.32-754.31.1.el6.x86_64/kernel/drivers/media/video/gspca/stv06xx/gspca_stv06xx.ko
rmmod /lib/modules/2.6.32-754.31.1.el6.x86_64/kernel/drivers/media/video/gspca/gspca_main.ko
rmmod /lib/modules/2.6.32-754.31.1.el6.x86_64/kernel/drivers/media/video/videodev.ko
rmmod /lib/modules/2.6.32-754.31.1.el6.x86_64/kernel/drivers/media/video/v4l2-compat-ioctl32.ko
The output shows that gspca_ov519 and gspca_stv06xx, along with their dependent modules, are unloaded in the reverse order in which they were loaded, provided that no processes depend on any of the modules being unloaded.
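Optionally, confirm that no gspca modules remain loaded; the following command should return no output once the modules are unloaded:
[root@nzhost1 ~]# lsmod | grep gspca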
Step 3: To prevent a module from being loaded directly, add a blacklist line for it to a configuration file under /etc/modprobe.d.
echo "blacklist gspca_ov519" >> /etc/modprobe.d/local-blocklist.conf
echo "blacklist gspca_stv06xx" >> /etc/modprobe.d/local-blocklist.conf
example:
[root@nzhost1 ~]# echo "blacklist gspca_ov519" >> /etc/modprobe.d/local-blocklist.conf
[root@nzhost1 ~]# echo "blacklist gspca_stv06xx" >> /etc/modprobe.d/local-blocklist.conf
[root@nzhost1 ~]# cat /etc/modprobe.d/local-blocklist.conf | grep gspca_ov519
blacklist gspca_ov519
[root@nzhost1 ~]# cat /etc/modprobe.d/local-blocklist.conf | grep gspca_stv06xx
blacklist gspca_stv06xx
Step 4: Kernel modules can be loaded directly or loaded as a dependency of another module.
To prevent the modules from being loaded as a dependency of another module, follow the step below:
echo "install gspca_ov519 /bin/false" >> /etc/modprobe.d/local-blocklist.conf
echo "install gspca_stv06xx /bin/false" >> /etc/modprobe.d/local-blocklist.conf
example:
[root@nzhost1 ~]# echo "install gspca_ov519 /bin/false" >> /etc/modprobe.d/local-blocklist.conf
[root@nzhost1 ~]# echo "install gspca_stv06xx /bin/false" >> /etc/modprobe.d/local-blocklist.conf
[root@nzhost1 ~]# cat /etc/modprobe.d/local-blocklist.conf | grep gspca_ov519
blacklist gspca_ov519
install gspca_ov519 /bin/false
[root@nzhost1 ~]# cat /etc/modprobe.d/local-blocklist.conf | grep gspca_stv06xx
blacklist gspca_stv06xx
install gspca_stv06xx /bin/false
The install line simply causes /bin/false to be run instead of loading the module.
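Optionally, verify that the install override takes effect; with these lines in place, an attempt to load either module runs /bin/false instead, so modprobe should fail with a nonzero exit status:
[root@nzhost1 ~]# modprobe gspca_ov519; echo $?
[root@nzhost1 ~]# modprobe gspca_stv06xx; echo $?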
Step 5: Make a backup copy of your initramfs.
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
Example:
[root@nzhost1 ~]# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
[root@nzhost1 ~]# uname -r
2.6.32-754.31.1.el6.x86_64
[root@nzhost1 ~]# ll /boot/initramfs-2.6.32-754.31.1.el6.x86_64.img.08-17-105347.bak
-rw------- 1 root root 21881438 Aug 17 10:53 /boot/initramfs-2.6.32-754.31.1.el6.x86_64.img.08-17-105347.bak
Step 6: If the kernel modules are part of the initramfs (boot configuration), rebuild your initial ramdisk image, omitting the modules to be avoided.
dracut --omit-drivers gspca_ov519 -f
dracut --omit-drivers gspca_stv06xx -f
example:
[root@nzhost1 ~]# dracut --omit-drivers gspca_ov519 -f
[root@nzhost1 ~]# dracut --omit-drivers gspca_stv06xx -f
[root@nzhost1 ~]# lsinitrd /boot/initramfs-2.6.32-754.31.1.el6.x86_64.img | grep gspca_ov519
[root@nzhost1 ~]# lsinitrd /boot/initramfs-2.6.32-754.31.1.el6.x86_64.img | grep gspca_stv06xx
Step 7: Append module_name.blacklist=1 to the kernel command line. We give the module an invalid parameter named blacklist and set it to 1 as a way to preclude the kernel from loading it.
sed --follow-symlinks -i '/\s*kernel \/vmlinuz/s/$/ gspca_ov519.blacklist=1/' /etc/grub.conf
sed --follow-symlinks -i '/\s*kernel \/vmlinuz/s/$/ gspca_stv06xx.blacklist=1/' /etc/grub.conf
example:
[root@nzhost1 ~]# sed --follow-symlinks -i '/\s*kernel \/vmlinuz/s/$/ gspca_ov519.blacklist=1/' /etc/grub.conf
[root@nzhost1 ~]# sed --follow-symlinks -i '/\s*kernel \/vmlinuz/s/$/ gspca_stv06xx.blacklist=1/' /etc/grub.conf
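Optionally, confirm that the parameters were appended; each kernel line in /etc/grub.conf should now end with gspca_ov519.blacklist=1 gspca_stv06xx.blacklist=1:
[root@nzhost1 ~]# grep 'kernel /vmlinuz' /etc/grub.conf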
Step 8: Blacklist the kernel modules in kdump's configuration file.
echo "blacklist gspca_ov519" >> /etc/kdump.conf
echo "blacklist gspca_stv06xx" >> /etc/kdump.conf
example:
[root@nzhost1 ~]# echo "blacklist gspca_ov519" >> /etc/kdump.conf
[root@nzhost1 ~]# echo "blacklist gspca_stv06xx" >> /etc/kdump.conf
[root@nzhost1 ~]# cat /etc/kdump.conf | grep gspca_ov519
blacklist gspca_ov519
[root@nzhost1 ~]# cat /etc/kdump.conf | grep gspca_stv06xx
blacklist gspca_stv06xx
Note: Perform Step 9 only if kexec-tools is installed and kdump is configured; otherwise, continue with Step 10.
Run the following commands to check whether kexec-tools is installed and kdump is operational:
[root@nzhost1 ~]# rpm -qa | grep kexec-tools
[root@nzhost1 ~]# service kdump status
Step 9: Restart the kdump service to pick up the changes to kdump's initrd.
service kdump restart
example:
[root@nzhost1 ~]# service kdump restart
Stopping kdump: [ OK ]
Starting kdump: [ OK ]
Step 10: Reboot the system at a convenient time to have the changes take effect.
Make sure the secondary host is up by pinging or logging in before rebooting the primary host.
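For example, a quick way to confirm that the peer host is reachable (ha2 is the peer-host alias used in the earlier steps):
[root@nzhost1 ~]# ping -c 3 ha2
[root@nzhost1 ~]# ssh ha2 uptime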
/sbin/shutdown -r now
example:
[root@nzhost1 ~]# /sbin/shutdown -r now
Make sure the primary server comes up and is reachable before performing the mitigation steps on the secondary server.
After applying the mitigation:
1. Start the services using the following commands:
[root@nzhost1 ~]# service heartbeat start
[root@nzhost1 ~]# ssh ha2 service heartbeat start
[root@nzhost1 ~]# service drbd start
[root@nzhost1 ~]# ssh ha2 service drbd start
2. Check the state of the system. Type:
[root@nzhost1 ~]# crm_mon -i5
Result: When the cluster manager comes up and is ready, status appears as follows.
Make sure that nzinit has started before you proceed. (This could take a few minutes.)
Node: nps61074 (e890696b-ab7b-42c0-9e91-4c1cdacbe3f9): online
Node: nps61068 (72043b2e-9217-4666-be6f-79923aef2958): online
Resource Group: nps
drbd_exphome_device(heartbeat:drbddisk): Started nps61074
drbd_nz_device(heartbeat:drbddisk): Started nps61074
exphome_filesystem(heartbeat::ocf:Filesystem): Started nps61074
nz_filesystem (heartbeat::ocf:Filesystem): Started nps61074
fabric_ip (heartbeat::ocf:IPaddr): Started nps61074
wall_ip (heartbeat::ocf:IPaddr): Started nps61074
nzinit (lsb:nzinit): Started nps61074
fencing_route_to_ha1(stonith:apcmaster): Started nps61074
fencing_route_to_ha2(stonith:apcmaster): Started nps61068
3. From host 1 (ha1), press Ctrl+C to break out of crm_mon.
4. Turn on heartbeat and DRBD using chkconfig:
ssh ha2 /sbin/chkconfig drbd on
/sbin/chkconfig drbd on
ssh ha2 /sbin/chkconfig heartbeat on
/sbin/chkconfig heartbeat on
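After the reboot, a quick way to confirm the mitigation on each host is to verify that neither module has been loaded; the commands should return no output:
[root@nzhost1 ~]# lsmod | grep gspca
[root@nzhost1 ~]# ssh ha2 lsmod | grep gspca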
References
Acknowledgement
Change History
18 Aug 2020: Original Publication
*The CVSS Environment Score is customer environment specific and will ultimately impact the Overall CVSS Score. Customers can evaluate the impact of this vulnerability in their environments by accessing the links in the Reference section of this Security Bulletin.
Disclaimer
Review the IBM security bulletin disclaimer and definitions regarding your responsibilities for assessing potential impact of security vulnerabilities to your environment.
Document Location
Worldwide
Document Information
Modified date:
23 September 2020
UID
ibm16261415