Performance testing

Performance testing is a vital step in deploying changes to your environments. Complete all final performance testing in the preproduction environment.

When you are getting started with your IBM Sterling Order Management System implementation, complete all performance testing and have the testing signed off a minimum of 2 weeks before your services launch. This time frame ensures that your team has time to address any performance issues that are found during the testing phase.

As part of your performance testing for getting started, provide a test summary report to the IBM Sterling Order Management System operations team. Include in this report the tested peak volumes and load. The peak volumes and load must correspond to the expected volumetrics that are listed in your IBM Sterling Order Management System statement of work. If your contract is based on "n" number of order lines at peak, complete performance testing up to that limit. If you, or your business partner, test beyond those contracted peak volumes, you assume all performance risk from the testing.
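For example, the following sketch shows one way to derive a target injection rate for your load scripts from a contracted peak expressed in order lines per hour. The contract figures and the average lines-per-order ratio are hypothetical placeholders; substitute the volumetrics from your own statement of work.

```java
// Minimal sketch: derive a load-script injection rate from a contracted peak.
// The figures below are hypothetical placeholders, not real contract values.
public class PeakRateCalculator {
    public static void main(String[] args) {
        long contractedPeakOrderLinesPerHour = 120_000; // hypothetical SOW figure
        double avgLinesPerOrder = 3.0;                  // hypothetical workload-mix figure

        double ordersPerHour = contractedPeakOrderLinesPerHour / avgLinesPerOrder;
        double ordersPerSecond = ordersPerHour / 3600.0;

        System.out.printf("Target injection rate: %.1f orders/sec (%.0f orders/hr)%n",
                ordersPerSecond, ordersPerHour);
        // Test only up to this rate: exceeding the contracted peak shifts
        // all performance risk from the testing to you or your business partner.
    }
}
```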

Defining a test plan

As part of ensuring the performance of your IBM Sterling Order Management System service, create performance test plans when you are getting started and when you implement any changes. To create a performance test plan, complete the following tasks:
  • Establish the workload mix and volumes. Use data from web server logs and other tools to project expected volumes, year-to-year growth, peak hour volumes, and more (a log-analysis sketch follows this list).
    Important: Define and clearly document the workload and volume definitions before you develop your load scripts.
  • Validate customer non-functional requirements. Define any non-functional requirements during the design phase.
  • Define all entry and exit criteria to ensure that your application code is stable.
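As a starting point for the workload-mix task above, the following sketch derives peak-hour request volumes from a web server access log. The log path and timestamp format are assumptions based on the common/combined log format; adjust them to match your own web servers.

```java
// Minimal sketch: count requests per hour in an access log and report the peak.
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.regex.*;

public class PeakHourVolumes {
    // Matches the hour portion of a common/combined log format timestamp,
    // e.g. [12/Mar/2024:14:05:32 +0000] -> "12/Mar/2024:14".
    private static final Pattern TS = Pattern.compile("\\[(\\d{2}/\\w{3}/\\d{4}:\\d{2})");

    public static void main(String[] args) throws IOException {
        Map<String, Long> perHour = new TreeMap<>();
        for (String line : Files.readAllLines(Path.of("access.log"))) { // assumed path
            Matcher m = TS.matcher(line);
            if (m.find()) {
                perHour.merge(m.group(1), 1L, Long::sum);
            }
        }
        perHour.entrySet().stream()
               .max(Map.Entry.comparingByValue())
               .ifPresent(e -> System.out.println("Peak hour " + e.getKey()
                       + " -> " + e.getValue() + " requests"));
    }
}
```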

Test phases and the iterative testing process

  1. Single user testing
    • Detailed code analysis
    • Pathlength
    • Memory footprint
    • Basic SQL structure
    • Application architecture
  2. Single system concurrency testing
    • Single system
    • Drive the full workload mix (a driver sketch follows this list)
    • Iterative test and fix/tune
    • Establish a tuned reference
  3. Incremental scale testing
    • Incrementally grow the farm
    • Tune the single system
    • Iterative fixing and tuning of the system
  4. Stability testing
    • Failover
    • Long runs
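To illustrate phase 2, here is a minimal single-system concurrency driver. The endpoint URL, thread count, and request count are hypothetical; a real test would drive your full workload mix with a dedicated load-testing tool rather than this bare-bones client.

```java
// Minimal sketch: drive concurrent requests at one system and report mean latency.
import java.net.URI;
import java.net.http.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.LongAdder;

public class ConcurrencySmokeDriver {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://oms.example.com/order")).GET().build(); // assumed URL
        int threads = 32;             // assumed concurrency level
        int requestsPerThread = 100;  // assumed per-thread request count
        LongAdder totalNanos = new LongAdder();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        // Count and report failures separately in a real test run.
                    }
                    totalNanos.add(System.nanoTime() - start);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        long total = (long) threads * requestsPerThread;
        System.out.printf("Mean latency: %.1f ms over %d requests%n",
                totalNanos.sum() / 1e6 / total, total);
    }
}
```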
The following stages make up the high-level iterative test and tuning process (a baseline-comparison sketch follows the list):
  • Baseline
    • What the system does now
    • Essential for measuring improvement
  • Minimal adjustment
    • Limit changes between runs
  • Test
    • Run for a sufficient duration
    • Measure at steady state
  • Observe
    • Look at all systems and logs
    • Record results
    • Plan for next test
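Here is a minimal sketch of the baseline-and-compare step in this loop: discard warm-up samples, compute a steady-state mean, and compare it against the recorded baseline. The latency samples are hypothetical placeholders.

```java
// Minimal sketch: steady-state comparison of a tuned run against a baseline.
import java.util.Arrays;

public class BaselineComparison {
    static double steadyStateMean(double[] latenciesMs, int warmupSamples) {
        // Measure at steady state: skip warm-up samples before averaging.
        return Arrays.stream(latenciesMs)
                     .skip(warmupSamples)
                     .average()
                     .orElse(Double.NaN);
    }

    public static void main(String[] args) {
        double[] baselineRun = {410, 395, 250, 245, 248, 252}; // hypothetical ms samples
        double[] tunedRun    = {400, 380, 210, 205, 208, 212}; // one adjustment later

        double baseline = steadyStateMean(baselineRun, 2);
        double tuned = steadyStateMean(tunedRun, 2);
        System.out.printf("Baseline %.0f ms -> tuned %.0f ms (%.0f%% change)%n",
                baseline, tuned, 100 * (tuned - baseline) / baseline);
    }
}
```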

Test environments, data, and responsibilities for functional and performance testing

  Test type | Responsibility | Environment | Data
  Unit testing | Implementation team | Developer toolkit environment | Mocked
  Function testing | Implementation team | Quality assurance | Mocked
  User acceptance testing | You or your business partner services team | Preproduction and production; quality assurance * | Valid business data
  Performance testing | You or your business partner services team | Quality assurance (initial performance testing); preproduction (with DynaCache enabled) | Valid business data
  Component failover testing | Implementation team | Production | Valid business data

* Functional user acceptance testing can be done in the quality assurance environment if the production environment data is not ready for use in testing.
Complete any application profiling in the developer toolkit environment, such as when you complete the following tasks:
  • Profiling Java code to identify under-performing functionality or modules, and to identify inefficiencies in designs and code.
  • SQL tracing and analysis to trace SQL activity and isolate and review SQL performance bottlenecks.
  • Verifying DynaCache implementation to ensure that all functionality that can be cached is cached accurately and effectively.
  • Profiling JavaScript code to identify memory leaks and inefficiencies in designs and code.
  • Request analysis to identify duplicate or unnecessary requests (a duplicate-detection sketch follows this list).
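For the request-analysis task, the following sketch scans a request log for duplicate calls. The log path and line format (one session-and-path pair per line) are assumptions; adapt them to whatever request logging your implementation produces.

```java
// Minimal sketch: flag request-log entries that occur more than once.
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class DuplicateRequestFinder {
    public static void main(String[] args) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        // Assumed format: one "sessionId requestPath" pair per line.
        for (String line : Files.readAllLines(Path.of("requests.log"))) { // assumed path
            counts.merge(line.trim(), 1, Integer::sum);
        }
        counts.forEach((request, count) -> {
            if (count > 1) {
                System.out.println(count + "x " + request); // candidate duplicate
            }
        });
    }
}
```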