Configuring an App Host on Microsoft Azure

Configure an App Host in Microsoft Azure by using the provided image.

Before you begin

Important:

The following procedure describes how to configure an IBM® QRadar® 7.3.3 App Host image, which has reached its End of Support. An IBM QRadar 7.4.3 App Host image is not yet available. After the image is installed, upgrade it to ensure that support is available. For information about upgrading to 7.4.3, see Upgrading QRadar SIEM.

You must acquire entitlement to a QRadar Software Node for any QRadar instance that is deployed from a third-party cloud marketplace. Entitlement to the software node should be in place before you deploy the QRadar instance. To acquire entitlement to a QRadar Software Node, contact your QRadar Sales Representative.

For any issues with QRadar software, engage IBM Support. For problems with the Microsoft Azure infrastructure, refer to the Microsoft Azure Support documentation. If IBM Support determines that your issue is caused by the Azure infrastructure, you must contact Microsoft to resolve the underlying issue.

You must use static IP addresses.

You cannot have more than two DNS entries. QRadar installation fails if you have more than two DNS entries in the /etc/resolv.conf file.

The App Host must be the same version as your Console before you can add the App Host to your deployment. You can upgrade the App Host to a later version of QRadar after you complete the installation by downloading the fix pack from Fix Central (https://www.ibm.com/support/fixcentral) and following the normal upgrade procedure. For more information about upgrades, see IBM QRadar Upgrade Guide.

If you are installing a data gateway for QRadar on Cloud, go to Installing a QRadar data gateway in Microsoft Azure (https://www.ibm.com/support/knowledgecenter/en/SSKMKU/com.ibm.qradar.doc_cloud/t_hosted_azure.html).

If you deploy a managed host and a Console in the same virtual network, use the private IP address of the managed host to add it to the Console.

If you deploy a managed host and a Console in different virtual networks, you must allow firewall rules for the communication between the Console and the managed host. For more information, see QRadar port usage.

You must complete all of the installation steps before you run QRadar commands such as qchange_netsetup.

For more information about configuring firewall rules between hosts, see Microsoft documentation.

Procedure

  1. Go to the Microsoft Azure Marketplace (https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ibm.qradar733?tab=Overview).
    Note: You can use the Plans + Pricing tab to estimate pricing for certain VM sizes, but you do not choose your VM size on this screen. Refer to the Core and RAM columns when you estimate pricing. Ignore the Disk Space column; all QRadar marketplace images include a disk for the operating system and a 1 TB disk for storage.
  2. Click Get It Now.
  3. Select QRadar SIEM AH 7.3.3 from the Software plan list and click Continue.
  4. Click Create to create an instance of the virtual appliance.
  5. Configure VM settings.
    1. Select an existing Resource Group or create a new one.
    2. Enter a virtual machine name.
      Note: The VM name must be 10 characters or fewer.
    3. Select a Region.
    4. Click Change size and ensure that your VM meets the minimum system requirements.
    5. Enter a username for the administrator account.
    6. Choose an SSH public key or Password.

      For more information about creating and using an SSH public-private key pair for Linux® VMs in Microsoft Azure, see Microsoft documentation.

    7. Set Public inbound ports to Allow selected ports.
    8. Set Select inbound ports to SSH (22) and HTTPS (443).
  6. Click Review + Create.
  7. Click Create to deploy the instance.
  8. When your VM is deployed in Microsoft Azure, set the private and public IP addresses to static.
    1. Click Go to resource.
    2. Click the public IP address.
    3. Set the Assignment to Static.
    4. Click Save.
    5. Click Overview.
    6. Click the Associated to link.
    7. Click IP configurations.
    8. In the list of IP configurations, click the configuration row where the Type is set to Primary.
    9. Set the Private IP address assignment to Static.
    10. Click Save.
  9. Create or select a security group that allows ports 22 and 443 only from trusted IP addresses, creating an allowlist of IP addresses that can access your QRadar deployment.
    In a QRadar deployment with multiple appliances, other ports might also be allowed between managed hosts. For more information about what ports might need to be allowed in your deployment, see Common ports and servers used by QRadar.
    1. Click Home.
    2. Click Virtual Machines.
    3. Click the name of your virtual machine.
    4. Click Networking.
    5. Click the SSH rule that is associated with port 22.
    6. In the edit pane, select IP Addresses from the Source list.
    7. In the Source IP addresses/CIDR ranges field, enter the address range of the IP addresses that are allowed to access the VM.
    8. Click Save.
    9. Click the HTTPS rule that is associated with port 443.
    10. In the edit pane, select IP Addresses from the Source list.
    11. In the Source IP addresses/CIDR ranges field, enter the address range of the IP addresses that are allowed to access the VM.
    12. Click Save.
  10. Display the SSH connection information for the public IP address of the virtual appliance.
    1. Click Virtual Machines > <virtual_machine_name>.
    2. Click Connect.
  11. Log in to your virtual machine.
    • To log in using SSH and your key pair, type the following command:
      ssh -i <key.pem> user@<public_IP_address>
    • To log in using SSH and your password, type the following command:
      ssh user@<public_IP_address>
  12. To check that the hostname is a fully qualified domain name (FQDN), type the following command:
    hostname -f

    If the command returns a hostname that is not an FQDN, DNS is misconfigured and installation fails. Restart this procedure with proper DNS configuration. For more information about DNS configuration, see the Microsoft Azure Support documentation.

  13. To check the length of your FQDN, type the following command:
    hostname -f | wc -c

    If the command returns a value greater than 63, installation fails. Restart this procedure with a shorter virtual machine name.

  14. Ensure that there are no more than two DNS entries for the instance by typing the following command:
    grep nameserver /etc/resolv.conf | wc -l
    If the command returns a value of 3 or more, edit /etc/resolv.conf to remove all but two of the entries before you proceed to the next step. You can add the entries back after installation is complete.
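The checks in steps 12 through 14 can be scripted as a small preflight sketch. The helper names below are illustrative, not QRadar tooling, and the 63-byte limit is counted the same way as `hostname -f | wc -c` (that is, including the trailing newline).

```shell
#!/bin/sh
# Preflight checks for steps 12-14 (illustrative helper names, not QRadar tooling).

# Succeeds only when the FQDN, counted as by `hostname -f | wc -c`
# (including the trailing newline), is 63 bytes or fewer.
check_fqdn_length() {
  [ "$(printf '%s\n' "$1" | wc -c)" -le 63 ]
}

# Prints the number of nameserver entries in a resolv.conf-style file;
# QRadar installation fails when this count is greater than 2.
count_nameservers() {
  grep -c '^nameserver' "$1"
}
```

For example, `check_fqdn_length "$(hostname -f)" || echo "FQDN too long"` flags an overlong hostname before the installer does.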
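The portal clicks in steps 8 and 9 can also be performed from the command line. The following Azure CLI sketch is illustrative only: the resource names (myResourceGroup, myPublicIP, myNic, myNSG), the ipconfig1 configuration name, the 10.0.0.4 address, and the 203.0.113.0/24 trusted range are all placeholders for your own resources.

```shell
# Sketch: set static IP addresses (step 8) and allowlist ports 22 and 443
# (step 9) with the Azure CLI. All names and addresses are placeholders.

# Step 8: make the public IP static and pin the NIC's primary private address.
az network public-ip update --resource-group myResourceGroup \
  --name myPublicIP --allocation-method Static
az network nic ip-config update --resource-group myResourceGroup \
  --nic-name myNic --name ipconfig1 --private-ip-address 10.0.0.4

# Step 9: allow SSH and HTTPS inbound only from a trusted CIDR range.
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
  --name Allow-SSH --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 22 \
  --source-address-prefixes 203.0.113.0/24
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
  --name Allow-HTTPS --priority 110 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 \
  --source-address-prefixes 203.0.113.0/24
```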

What to do next

If you need to increase file system storage beyond the default 1 TB, follow the steps in Increasing file system storage for a new App Host by recreating the data disk at a larger size. If possible, increase the file system storage before you complete the installation; expanding storage on a running system is riskier than expanding it before installation is complete.

If you don't need more than 1 TB of storage, proceed to Installing the App Host.

If you need to change your hostname or FQDN, run the qchange_netsetup command.

Increasing file system storage for a new App Host by recreating the data disk at a larger size

Increase the size of the file system on the App Host by recreating the existing data disk at a larger size and by using the Red Hat Logical Volume Manager (LVM).

Before you begin

For more information about expanding the size of a disk, see Microsoft documentation.

About this task

Warning: This procedure is for new installations only, and must be completed before you complete the steps in Installing the App Host. Following these steps after installation is complete results in errors and data loss.

Procedure

  1. Stop your virtual machine (VM).
  2. Click Disks.
    Warning: Do not add more disks.
    To increase storage to 4095 GiB or less:
    1. Click on the data disk link.
    2. Click Size + performance.
    3. Choose from the list, or enter the new disk size in GiB.
    4. Click Save.
    To increase storage to more than 4095 GiB:
    1. Click Edit.
    2. Click the X next to the data disk to detach the disk.
    3. Click Save.
    4. Click Home.
    5. Click Disks.
    6. Click the disk associated with the VM that you are editing.
    7. Click Size + performance.
    8. Enter the new disk size in GiB.
    9. Click Save.
    10. Go to the Home screen and click Virtual machines.
    11. Click the name of your virtual machine.
    12. Click Disks.
    13. Click + Add data disk.
    14. Select the disk that you modified.
    15. Click Save.
  3. After the data disk is expanded, restart your VM.
  4. Log in to your VM by using ssh.
  5. Determine the device name and partition number for the /store and /transient file systems by typing the following command:
    lsblk

    In this example lsblk output, for the /store and /transient file systems, the <device_name> is sdc, the <partition_number> is 1, and the <volume_group> is data.

    NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    fd0                      2:0    1     4K  0 disk 
    sda                      8:0    0    98G  0 disk 
    ├─sda1                   8:1    0     1G  0 part /boot
    ├─sda2                   8:2    0    20G  0 part /
    ├─sda3                   8:3    0   200M  0 part /boot/efi
    ├─sda4                   8:4    0     1K  0 part 
    └─sda5                   8:5    0  76.8G  0 part 
      ├─rhel-var           253:0    0     8G  0 lvm  /var
      ├─rhel-var_log       253:1    0    18G  0 lvm  /var/log
      ├─rhel-temp          253:2    0     8G  0 lvm  /temp
      ├─rhel-storetmp      253:3    0    15G  0 lvm  /storetmp
      ├─rhel-opt           253:4    0    14G  0 lvm  /opt
      ├─rhel-home          253:5    0     6G  0 lvm  /home
      └─rhel-var_log_audit 253:6    0   7.8G  0 lvm  /var/log/audit
    sdb                      8:16   0    32G  0 disk 
    └─sdb1                   8:17   0    32G  0 part /mnt/resource
    sdc                      8:32   0     6T  0 disk 
    └─sdc1                   8:33   0  1022G  0 part 
      ├─data-transient     253:7    0 204.4G  0 lvm  /transient
      └─data-store         253:8    0 817.6G  0 lvm  /store
  6. Become the super user by typing the following command and entering your password when prompted:
    sudo -i
  7. Open the parted prompt by typing the following command:
    parted /dev/<device_name>

    Example command:

    parted /dev/sdc
  8. Switch the display units to MiB and print the partition table by typing the following commands:
    unit MiB
    p
  9. When prompted with Error: The backup GPT table is not at the end of the disk..., enter Fix.
  10. When prompted with Warning: Not all of the space available ..., enter Fix.
  11. Resize the partition to fill the disk by typing the following command:
    resizepart <partition_number> 100%

    Example command:

    resizepart 1 100%
  12. Exit parted by typing the following command:
    quit
  13. Ensure that the kernel recognizes the new partition information by typing the following command:
    partprobe /dev/<device_name><partition_number>

    Example command:

    partprobe /dev/sdc1
    • If there is no output, the command succeeded; proceed directly to the next step.
    • If the output indicates that partprobe did not detect the new partitions, reboot the system before you continue to the next step.
  14. Grow the physical volume to fill the extra disk space by typing the following command:
    pvresize /dev/<device_name><partition_number>

    Example command:

    pvresize /dev/sdc1
    Example successful output:
    Physical volume "/dev/sdc1" changed
      1 physical volume(s) resized / 0 physical volume(s) not resized
  15. Expand /transient by 20% of the extra disk space by typing the following command:
    lvextend -l +20%FREE /dev/<volume_group>/transient

    Example command:

    lvextend -l +20%FREE /dev/data/transient
    Example successful output:
    Size of logical volume data/transient changed from 204.40 GiB (52326 extents) to 1.20 TiB (314573 extents).
      Logical volume data/transient successfully resized.
  16. Expand /store into the remaining extra disk space by typing the following command:
    lvextend -l +100%FREE /dev/<volume_group>/store

    Example command:

    lvextend -l +100%FREE /dev/data/store
    Example successful output:
    Size of logical volume data/store changed from 817.60 GiB (209305 extents) to 4.00 TiB (1048985 extents).
      Logical volume data/store successfully resized.
  17. Reformat the /store file system:
    1. Unmount the /store file system by typing the following command:
      umount /dev/mapper/<volume_group>-store

      Example command:

      umount /dev/mapper/data-store
    2. Construct the XFS file system for /store by typing the following command:
      mkfs.xfs -f /dev/mapper/<volume_group>-store

      Example command:

      mkfs.xfs -f /dev/mapper/data-store
      Example successful output:
      meta-data=/dev/mapper/data-store isize=512    agcount=5, agsize=268435455 blks
               =                       sectsz=4096  attr=2, projid32bit=1
               =                       crc=1        finobt=0, sparse=0
      data     =                       bsize=4096   blocks=1074160640, imaxpct=5
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
      log      =internal log           bsize=4096   blocks=521728, version=2
               =                       sectsz=4096  sunit=1 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
    3. Verify that the XFS file system for /store is not damaged by typing the following command:
      xfs_repair /dev/mapper/<volume_group>-store

      Example command:

      xfs_repair /dev/mapper/data-store
      Example successful output:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 3
              - agno = 4
              - agno = 2
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      done
    4. Mount the /store file system by typing the following command:
      mount /dev/mapper/<volume_group>-store

      Example command:

      mount /dev/mapper/data-store
  18. Reformat the /transient file system:
    1. Unmount the /transient file system by typing the following command:
      umount /dev/mapper/<volume_group>-transient

      Example command:

      umount /dev/mapper/data-transient
    2. Construct the XFS file system for /transient by typing the following command:
      mkfs.xfs -f /dev/mapper/<volume_group>-transient

      Example command:

      mkfs.xfs -f /dev/mapper/data-transient
      Example successful output:
      meta-data=/dev/mapper/data-transient isize=512    agcount=4, agsize=80530688 blks
               =                       sectsz=4096  attr=2, projid32bit=1
               =                       crc=1        finobt=0, sparse=0
      data     =                       bsize=4096   blocks=322122752, imaxpct=5
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
      log      =internal log           bsize=4096   blocks=157286, version=2
               =                       sectsz=4096  sunit=1 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
    3. Verify that the XFS file system for /transient is not damaged by typing the following command:
      xfs_repair /dev/mapper/<volume_group>-transient

      Example command:

      xfs_repair /dev/mapper/data-transient
      Example successful output:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      done
    4. Mount the /transient file system by typing the following command:
      mount /dev/mapper/<volume_group>-transient

      Example command:

      mount /dev/mapper/data-transient
  19. Verify that the new sizes of the expanded file systems are correct by typing the following command:
    df -h
    Example successful output:
    Filesystem                      Size  Used Avail Use% Mounted on
    /dev/sda2                        20G  1.2G   18G   7% /
    devtmpfs                        7.9G     0  7.9G   0% /dev
    tmpfs                           7.9G     0  7.9G   0% /dev/shm
    tmpfs                           7.9G  9.1M  7.9G   1% /run
    tmpfs                           7.9G     0  7.9G   0% /sys/fs/cgroup
    /dev/sda1                       976M  127M  783M  14% /boot
    /dev/sda3                       200M  8.0K  200M   1% /boot/efi
    /dev/mapper/rhel-var            8.0G  178M  7.9G   3% /var
    /dev/mapper/rhel-opt             14G  3.5G   11G  25% /opt
    /dev/mapper/rhel-storetmp        15G   33M   15G   1% /storetmp
    /dev/mapper/rhel-temp           8.0G   33M  8.0G   1% /temp
    /dev/mapper/rhel-home           6.0G   33M  6.0G   1% /home
    /dev/mapper/rhel-var_log         18G   44M   18G   1% /var/log
    /dev/mapper/rhel-var_log_audit  7.8G   70M  7.8G   1% /var/log/audit
    /dev/sdb1                        32G   13G   20G  38% /mnt/resource
    tmpfs                           1.6G     0  1.6G   0% /run/user/1000
    /dev/mapper/data-store          4.0T   33M  4.0T   1% /store
    /dev/mapper/data-transient      1.2T   33M  1.2T   1% /transient
  20. Reboot the VM.
  21. Log in to your virtual machine.
    • To log in using SSH and your key pair, type the following command:
      ssh -i <key.pem> user@<public_IP_address>
    • To log in using SSH and your password, type the following command:
      ssh user@<public_IP_address>
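For reference, the disk-expansion commands in steps 6 through 18 can be condensed into the following sketch. It assumes the example names from the lsblk output above (/dev/sdc, partition 1, volume group data), must be run as root on the VM itself, and is destructive: use it only on a new, not-yet-installed App Host, and substitute the names that lsblk reports in your own environment.

```shell
#!/bin/sh
# Condensed sketch of steps 6-18; destructive, new installations only.
# Device, partition, and volume group names are the examples from the
# lsblk output above; substitute your own.
set -e
DEV=/dev/sdc
PART=1
VG=data

parted "$DEV" resizepart "$PART" 100%   # answer Fix to the GPT prompts if asked
partprobe "${DEV}${PART}"               # make the kernel reread the partitions
pvresize "${DEV}${PART}"                # grow the LVM physical volume

lvextend -l +20%FREE "/dev/${VG}/transient"   # 20% of the new space
lvextend -l +100%FREE "/dev/${VG}/store"      # the remainder to /store

for lv in store transient; do           # reformat and re-check each file system
  umount "/dev/mapper/${VG}-${lv}"
  mkfs.xfs -f "/dev/mapper/${VG}-${lv}"
  xfs_repair "/dev/mapper/${VG}-${lv}"
  mount "/dev/mapper/${VG}-${lv}"
done
```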

Results

If you increased the file system storage, you might see the following warning when you log in to the system:

WARNING:*******************************************************************
WARNING: QRadar requires 4092M of swap space but was only able to find
WARNING: 0M, please increase swap space by at least 4092M. Without this
WARNING: additional swap space, some components of QRadar will not function
WARNING: properly (such as complex queries or reports). Please contact
WARNING: Customer Support for further details and assistance in resolving
WARNING: this issue.
WARNING:*******************************************************************

On a new VM in Microsoft Azure, this warning is benign after you increase file system storage: it is displayed because the swap space for the VM is still being updated in the Microsoft Azure infrastructure. You can proceed with the installation.
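To watch for the swap space coming online, you can poll the Swap line of `free -m`. The helper below is an illustrative parse, not QRadar tooling; the 4092 MiB threshold comes from the warning text above.

```shell
#!/bin/sh
# Prints the total swap size in MiB from `free -m`-style output on stdin.
# Illustrative helper; per the warning above, QRadar needs at least 4092 MiB.
swap_total_mb() {
  awk '/^Swap:/ {print $2}'
}
```

For example, `free -m | swap_total_mb` prints the current swap total; once it reaches 4092 or more, the warning no longer applies.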

What to do next

Follow the steps in Installing the App Host.

Installing the App Host

Procedure

  1. Type the following command to install the App Host:
    sudo /root/setup_apphost
  2. The system prompts you to set a root password. Set a strong root password that meets the following criteria:
    • Contains at least 5 characters
    • Contains no spaces
    • Can include the special characters @, #, ^, and *, unless you are installing a data gateway
  3. Ensure that the App Host is the same version as your Console, then add the host to your deployment in QRadar.
    1. On the navigation menu, click Admin.
    2. In the System Configuration section, click System and License Management.
    3. In the Display list, select Systems.
    4. On the Deployment Actions menu, click Add Host.
    5. Configure the settings for the managed host by providing a static IP address, and the root password to access the operating system shell on the appliance.
    6. Click Add.
    7. Optional: Use the Deployment Actions > View Deployment menu to see visualizations of your deployment. You can download a PNG image or a Microsoft Visio (2010) VDX file of your deployment visualization.
    8. On the Admin tab, click Advanced > Deploy Full Configuration.
      Important: QRadar continues to collect events when you deploy the full configuration. When the event collection service must restart, QRadar does not restart it automatically. A message displays that gives you the option to cancel the deployment and restart the service at a more convenient time.
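The root-password criteria in step 2 can be expressed as a small check. This is an illustrative sketch only, not the validation that the setup_apphost script itself performs.

```shell
#!/bin/sh
# Illustrative check of the documented root-password rules: at least 5
# characters and no spaces. Not the actual validation performed by QRadar.
valid_root_password() {
  case "$1" in
    *' '*) return 1 ;;   # reject any password containing a space
  esac
  [ "${#1}" -ge 5 ]      # require at least 5 characters
}
```

For example, `valid_root_password "$candidate" || echo "password too weak"` mirrors the two documented rules.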

What to do next

If you removed any DNS entries in /etc/resolv.conf, restore them.

Important: IBM QRadar 7.3.3 has reached End of Support. To ensure that support is available, upgrade your deployment. For information about upgrading to 7.4.3, see Upgrading QRadar SIEM.