Multi-tenancy support (optional)

Organizations sometimes need to deploy and manage multiple ICFM instances for distinct entities called tenants. For example, organizations might host ICFM on behalf of their clients, but find it inefficient or too costly to host entirely separate ICFM deployments (separate, licensed installations of DB2, WebSphere Application Server, MQ, and so on). For these organizations, multi-tenancy support is a welcome alternative.

An ICFM tenant is a discrete runtime that uses instances of software components that are sourced from the single, shared ICFM installation infrastructure. (Think of a typical ICFM installation as containing a 'base tenant' out of the box.) An ICFM administrator can manage and maintain the shared ICFM installation infrastructure as a single entity. An ICFM (or tenant-specific) administrator can also manage tenant resources.

Tip: Depending on your requirements, you can create multiple, additional tenants.
Consider these key characteristics of multi-tenancy support when making deployment decisions:
Isolation and data separation between tenants
The data of each tenant is isolated, such that no tenant can access the data of any other tenant. This isolation applies to the business data that ICFM hosts for each tenant, as well as to the analytics deployed by each tenant, the configuration of each tenant, the audit history of each tenant, and the investigations conducted by each tenant. Keep in mind that because tenants cannot see each other, they cannot perform analysis across the data sets of multiple tenants, nor can the hosting organization perform analysis across multiple tenants.
Tenants are managed together, but execute independently
Tenants use a common, shared set of infrastructure for multiple ICFM deployments. For example, instead of installing and managing a separate DB2 installation for each ICFM deployment, the organization shares a single DB2 installation across all tenants, while each tenant gets a discrete, isolated database instance within it.
Tenant workloads can be compatible (day versus night) or conflicting
Inherent in the sharing of infrastructure across ICFM tenants is competition between the computation and data access loads associated with those tenants. This might include direct competition, such as two tenants importing large data sets at the same time, or indirect competition that is caused by one tenant importing data while another is performing analytics. Success in sharing infrastructure across ICFM tenants requires an understanding of the runtime loads of these tenants and how those loads are distributed over time, so that the shared infrastructure can be tuned accordingly. For example, tenant DB2 instances compete for the same resources (CPU, I/O, network bandwidth, and so on) of the host ICFM Data server, making monitoring and management of the data server more complex.

When these servers host multiple ICFM tenants, each server hosts a component for each ICFM tenant, as outlined in the following diagram.

Diagram of multi-tenant server components
Attention: Multi-tenancy is supported only in a three-server ICFM environment.

 1  Single or multiple MQ backbones for high-speed, component-to-component communication

Message queue isolation is achieved through the use of multiple queue managers within a single MQ installation. IBM MQ is used extensively to route calls between ICFM components and follows a pattern similar to other ICFM components, with one MQ installation servicing multiple tenants. This MQ installation hosts a distinct queue manager and queues for each tenant, which isolates processing workloads and eliminates the possibility of cross-contamination between tenants.

As with other ICFM components, if loads dictate, you can deploy multiple MQ installations; for example, provide a separate MQ installation for a tenant with especially high message rates.
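As a sketch only, a per-tenant queue manager can be provisioned with the standard IBM MQ administration commands. The queue manager and queue names below are placeholders for illustration, not ICFM defaults:

```shell
# Create and start a dedicated queue manager for the new tenant
# (TENANT_A.QM is a placeholder name).
crtmqm TENANT_A.QM
strmqm TENANT_A.QM

# Define tenant-local queues inside that queue manager
# (queue names are illustrative only).
runmqsc TENANT_A.QM <<'EOF'
DEFINE QLOCAL('TENANT_A.REQUEST')
DEFINE QLOCAL('TENANT_A.REPLY')
EOF
```

Because each tenant's queues live in a separate queue manager, messages for one tenant can never be delivered to the queues of another.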

 2  Single or multiple ICFM engine instances that service multiple tenants

A separate application server cluster (consisting of a single application server) is created for each tenant and federated into the ICFM administrative cell. ICFM 'base' applications are installed on the tenant application server.
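A hedged sketch of how such a per-tenant cluster might be created with the WebSphere wsadmin scripting client. The cluster name is a placeholder, and actual ICFM tenant provisioning may automate or extend this step:

```shell
# Run from the deployment manager profile's bin directory.
# tenantA_cluster is an illustrative name, not an ICFM default.
./wsadmin.sh -lang jython -c "
AdminTask.createCluster('[-clusterConfig [-clusterName tenantA_cluster]]')
AdminConfig.save()
"
```

Federating the cluster into the ICFM administrative cell lets the administrator manage all tenant clusters from a single console while each tenant's applications run in isolation.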

 3  Single or multiple analytic choreography backbones operating against multiple analytic configuration stores - one per tenant

Analytics can run in any location when they are “wrapped” by an analysis flow that is registered with ICFM. These analysis flows, in turn, can be written in any technology (for example, IIB or a Java MDB), provided that they accept and return the XML message structures that ICFM expects. This means that an analytic job can run on the mainframe, or in any other location, if the job is called by an ICFM-registered analysis flow.

 4  Single or multiple servers according to load. This can be one server that services N tenants, one server per tenant, multiple servers servicing a single (large) tenant, or other combinations

For the analytic engines, the common pattern is to use SPSS and ODM, both of which can host multiple ICFM analytic tenants, either within a single shared instance or, if loads dictate, in a separate instance for tenants with exceptional performance needs. SPSS can farm analytic workloads across a set of SPSS instances, load balancing as required.

Out of the box, ICFM provides single, non-tenant-specific implementations of SPSS and ODM.

 5  Multiple database server instances - one per tenant

Isolating the data structures of each ICFM tenant into a separate database instance allows for separation of user credentials, data persistence, table spaces, and (to some degree) load isolation. When an ICFM tenant is added, a new Counter Fraud database instance (suitably named for that tenant) is created and populated with any required default values. Updates to the hosting DB2 environment, load balancing of the DB2 instances, and backups of tenants are all performed within a single DB2 installation for all tenants.
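For illustration only, provisioning a tenant's isolated instance and database uses the standard DB2 commands. The instance owner, fenced user, database name, and installation path below are all placeholders:

```shell
# As root: create a dedicated DB2 instance owned by a tenant-specific user
# (db2inst_a and db2fenc_a are placeholder user IDs; adjust the path to
# your DB2 installation).
/opt/ibm/db2/V10.5/instance/db2icrt -u db2fenc_a db2inst_a

# As the instance owner: start the instance and create the tenant database
# (TENANTA is an illustrative database name, not an ICFM default).
db2start
db2 "CREATE DATABASE TENANTA"
```

Because each tenant's database lives in its own instance, credentials, table spaces, and buffer pools can be administered per tenant while remaining inside the single shared DB2 installation.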