Performance testing
When you are getting started with your IBM Sterling Order Management System implementation, complete all performance testing and have the testing signed off a minimum of 2 weeks before your services launch. This time frame ensures that your team has time to address any performance issues that are found during the testing phase.
As part of your performance testing for getting started, provide a test summary report to the IBM Sterling Order Management System operations team. Include in this report the tested peak volumes and load. The peak volumes and load must correspond to the expected volumetrics that are listed in your IBM Sterling Order Management System statement of work. If your contract is based on "n" number of order lines at peak, complete performance testing up to that limit. If you, or your business partner, test beyond those contracted peak volumes, you assume all performance risk from the testing.
Defining a test plan
- Establish workload mix and volumes. Use data from web server logs and other tools to establish predicted volumes, year-to-year growth, predicted peak-hour volumes, and more (see the sketch after this list). Important: Define and clearly document the workload and volume definitions before you develop your load scripts.
- Validate customer non-functional requirements. Define any non-functional requirements during the design phase.
- Define all entry and exit criteria to ensure that your application code is stable.
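As a starting point for the workload definition, the following is a minimal sketch that derives a peak hour and a workload mix from a web server access log. The file name access.log, the common-log-format timestamp and request line, and the endpoint grouping are assumptions for illustration; substitute the logs and tooling that your implementation actually uses.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: derive peak-hour volume and workload mix from a web server access log. */
public class WorkloadMix {
    // Assumes common log format, for example:
    // 10.0.0.1 - - [12/Mar/2024:14:05:31 +0000] "POST /order/create HTTP/1.1" 200 512
    private static final Pattern LINE = Pattern.compile(
            "\\[(\\d{2}/\\w{3}/\\d{4}):(\\d{2}):\\d{2}:\\d{2}[^\\]]*\\]\\s+\"\\w+\\s+(\\S+)");

    public static void main(String[] args) throws IOException {
        Map<String, Integer> requestsPerHour = new HashMap<>();  // "date:hour" -> count
        Map<String, Integer> requestsPerPath = new HashMap<>();  // endpoint -> count

        for (String line : Files.readAllLines(Paths.get("access.log"))) {  // assumed file name
            Matcher m = LINE.matcher(line);
            if (!m.find()) {
                continue;
            }
            String hourBucket = m.group(1) + ":" + m.group(2);
            String path = m.group(3).split("\\?")[0];  // drop the query string
            requestsPerHour.merge(hourBucket, 1, Integer::sum);
            requestsPerPath.merge(path, 1, Integer::sum);
        }

        // The peak hour drives the volumes that the load scripts must reproduce.
        requestsPerHour.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .ifPresent(e -> System.out.println(
                        "Peak hour " + e.getKey() + ": " + e.getValue() + " requests"));

        // Workload mix: share of total traffic per endpoint.
        long total = requestsPerPath.values().stream().mapToLong(Integer::longValue).sum();
        requestsPerPath.forEach((path, count) ->
                System.out.printf("%-40s %6.2f%%%n", path, 100.0 * count / total));
    }
}
```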
Test phases and the iterative testing process
- Single user testing
  - Detailed code analysis
    - Pathlength
    - Memory footprint
    - Basic SQL structure
    - Application architecture
- Single system concurrency testing
  - Single system
  - Drive full workload mix
  - Iterative test and fix/tune
  - Establish tuned reference
- Incremental scale testing
  - Incrementally grow farm
  - Tune the single system
  - Iterative fixing and tuning of the system
- Stability testing
  - Failover
  - Long runs
- Baseline
  - What the system does now
  - Essential for measuring improvement
- Minimal adjustment
  - Limit change between runs
- Test
  - Sufficient duration
  - Measure at steady state (see the sketch after this list)
- Observe
  - Look at all systems and logs
  - Record results
  - Plan for next test
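To make the baseline and the "measure at steady state" step concrete, the following is a minimal sketch that discards the warm-up samples from a run, computes the steady-state throughput, and compares it with the baseline run. The per-minute order-line throughput numbers and the three-minute warm-up are illustrative assumptions, not product behavior.

```java
import java.util.Arrays;

/** Sketch: measure at steady state and compare a run against the baseline. */
public class SteadyStateComparison {

    /** Mean of the samples after the warm-up portion is discarded. */
    static double steadyStateMean(double[] samplesPerMinute, int warmupMinutes) {
        return Arrays.stream(samplesPerMinute)
                .skip(warmupMinutes)
                .average()
                .orElse(Double.NaN);
    }

    public static void main(String[] args) {
        // Illustrative per-minute order-line throughput from two runs; in practice
        // these values come from your load tool or application metrics.
        double[] baselineRun = {120, 310, 620, 700, 705, 698, 702, 699, 701, 700};
        double[] tunedRun    = {130, 340, 650, 760, 765, 758, 762, 761, 759, 760};
        int warmupMinutes = 3;  // assumed ramp-up before steady state

        double baseline = steadyStateMean(baselineRun, warmupMinutes);
        double current  = steadyStateMean(tunedRun, warmupMinutes);

        System.out.printf("Baseline steady state: %.1f order lines/min%n", baseline);
        System.out.printf("Current  steady state: %.1f order lines/min%n", current);
        System.out.printf("Improvement: %+.1f%%%n", (current - baseline) / baseline * 100.0);
    }
}
```

Limiting the change between runs to a single adjustment keeps a comparison like this meaningful: any movement in the steady-state numbers can be attributed to that one change.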
Test environments, data, and responsibilities for functional and performance testing
Testing | Responsibility | Environment | Data
---|---|---|---
Unit testing | Implementation team | Developer toolkit environment | Mocked
Function testing | Implementation team | Quality assurance | Mocked
User acceptance testing | You or your business partner services team | Preproduction and production; Quality assurance * | Valid business data
Performance testing | You or your business partner services team | Quality assurance (initial performance testing); Preproduction (with DynaCache enabled) | Valid business data
Component failover testing | Implementation team | Production | Valid business data
Performance testing and tuning typically includes the following activities:
- Profiling Java code to identify underperforming functionality or modules and to identify inefficiencies in designs and code.
- SQL tracing and analysis to trace SQL activity and to isolate and review SQL performance bottlenecks.
- Verifying the DynaCache implementation to ensure that all functionality that can be cached is cached accurately and effectively.
- Profiling JavaScript code to identify memory leaks and inefficiencies in designs and code.
- Request analysis to identify duplicate or unnecessary requests (a sketch of one such analysis follows this list).
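For the request-analysis activity, the following is a minimal sketch that flags requests repeated by the same client for the same URL within a short time window; such requests are candidates for elimination or caching. The file name requests.log, the simplified log format, and the five-second window are assumptions for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: flag possible duplicate requests (same client and URL, close together in time). */
public class DuplicateRequestAnalysis {

    // Assumed log format for illustration: client, epoch seconds, quoted request line, e.g.
    // 10.0.0.1 1710252331 "GET /api/inventory/lookup?item=42 HTTP/1.1"
    private static final Pattern LINE = Pattern.compile("^(\\S+)\\s+(\\d+)\\s+\"\\w+\\s+(\\S+)");

    private static final long WINDOW_SECONDS = 5;  // assumed window for counting a repeat

    public static void main(String[] args) throws IOException {
        Map<String, Long> lastSeen = new HashMap<>();       // client|url -> last timestamp
        Map<String, Integer> duplicates = new HashMap<>();  // client|url -> repeat count

        for (String line : Files.readAllLines(Paths.get("requests.log"))) {  // assumed file name
            Matcher m = LINE.matcher(line);
            if (!m.find()) {
                continue;
            }
            String key = m.group(1) + "|" + m.group(3);
            long timestamp = Long.parseLong(m.group(2));

            Long previous = lastSeen.get(key);
            if (previous != null && timestamp - previous <= WINDOW_SECONDS) {
                duplicates.merge(key, 1, Integer::sum);
            }
            lastSeen.put(key, timestamp);
        }

        // Report the most frequently repeated client/URL pairs first.
        duplicates.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .forEach(e -> System.out.println(e.getValue() + " repeats  " + e.getKey()));
    }
}
```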