
Respond faster to business demands

Engineered for agility, IBM Power E1080 delivers world-record performance and scalability for core enterprise workloads, protects your applications and data with pervasive, layered defenses, and enables you to transform data into insights quickly.

IBM Power E1080 sets world record 8-socket single server SPEC CPU 2017 benchmark result

For the systems and workloads compared:

Power E1080 sets world record 8-socket performance (a worked check of the per-core and per-socket ratios follows the notes below):

SPECrate®2017_int_peak: 2170 vs. 1620
SPECrate®2017_int_base: 1700 vs. 1570
2.5X more performance per core
1.3X more performance per socket
Power E1080 servers scale to 16 sockets

Figure: 8-socket single-server SPEC CPU 2017 benchmark (SPECrate®2017_int_peak), Power E1080 (8-socket, 120-core) vs. HPE Superdome Flex 280 (8-socket, 224-core)

Notes:

Comparison based on single 8-socket systems (IBM Power E1080 3.55 - 4 GHz, 120 core, AIX and Superdome Flex 280 2.90 GHz, Intel Xeon Platinum 8380H) using published results at www.spec.org/cpu2017/results/ as of 02 September 2021. SPEC® and the benchmark names SPECrate®2017_int_base and SPECrate®2017_int_peak are registered trademarks of the Standard Performance Evaluation Corporation. For more information about SPEC CPU 2017, see www.spec.org/cpu2017/.
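The per-core and per-socket advantages above follow from the published SPECrate®2017_int_peak scores and the core and socket counts of the two systems. A minimal sketch of that arithmetic, using only the figures cited in this section:

```python
# Sanity check of the per-core and per-socket claims using the published
# SPECrate(R)2017_int_peak results cited above.
e1080_peak, e1080_cores, e1080_sockets = 2170, 120, 8
flex280_peak, flex280_cores, flex280_sockets = 1620, 224, 8

per_core_ratio = (e1080_peak / e1080_cores) / (flex280_peak / flex280_cores)
per_socket_ratio = (e1080_peak / e1080_sockets) / (flex280_peak / flex280_sockets)

print(f"Per-core advantage:   {per_core_ratio:.1f}x")    # ~2.5x
print(f"Per-socket advantage: {per_socket_ratio:.1f}x")  # ~1.3x
```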

Modernize apps with OpenShift Container Platform on Power E1080

For the systems and workloads compared:

4.3X more throughput per core and 48% lower 3-year TCO running containerized applications and database on a Power E1080 vs. running the same containerized applications on an x86 server
4.1X better price-performance when running containerized applications and database on a Power E1080 vs. running the same containerized applications on an x86 server

IBM Power E1080 vs. new Intel Xeon SP based 2-socket server (a worked check of these ratios follows the notes below):

Server price (E1080 Linux core activations @ $2,000/core x 20 cores; Intel server list price: 40-core Cascade Lake, 512 GB; 3-year warranty for IBM and Intel servers): $57,920 vs. $41,900
Solution cost (server + virtualization + OpenShift annual subscription @ $2,000/core x 3 years + WSHE @ $10,240/core per year x 3 years): $792,320 vs. $1,510,700
DayTrader workload, total transactions per second (tps): 40,687 tps vs. 18,852 tps
Price-performance: 51.35 tps/$1,000 vs. 12.48 tps/$1,000

Notes:

Based on IBM internal testing of Red Hat OpenShift Container Platform 4.8.2 worker nodes running 80 pods each with 10 users using the DayTrader7 workload (https://github.com/WASdev/sample.daytrader7/releases/tag/v1.4) accessing AIX Db2 databases. Average CPU utilization for the OCP worker nodes is >95%. Comparison: E1080 with co-located OCP and AIX Db2 nodes versus OCP node on Cascade Lake accessing AIX Db2 node on E1080. Valid as of 8/25/2021 and conducted under laboratory conditions; individual results can vary based on workload size, use of storage subsystems and other conditions. TCO is defined as hardware, software and maintenance costs over a period of three years. IBM Power E1080 (40 cores / 3.8 GHz / 2 TB memory) in maximum performance mode, 25 Gb Ethernet adapter (SR-IOV), 1 x 16 Gbps FC adapter, with PowerVM. Competitive system: Intel(R) Xeon(R) Gold 6248 CPU (Cascade Lake) in performance mode, 40 cores / 3.9 GHz / 512 GB memory, 25 Gb Ethernet adapter (SR-IOV), 1 x 16 Gbps FC adapter, with KVM. Pricing is based on Power E1080 pricing at http://www-03.ibm.com/systems/power/hardware/linux-lc.html, typical industry-standard x86 pricing at https://www.synnexcorp.com/us/govsolv/pricing/, and IBM software pricing for Red Hat OpenShift and IBM WebSphere Hybrid Edition monthly subscription.
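The headline 4.3X and 4.1X figures can be reproduced from the comparison table above. The sketch below assumes the DayTrader throughput is attributed to the 20 activated E1080 cores and the 40 cores of the x86 comparison system, as described in the notes:

```python
# Reproduce the headline ratios from the comparison table above.
# Assumption: throughput is attributed to 20 activated E1080 cores and
# 40 x86 cores, per the pricing and configuration notes.
e1080_tps, e1080_cores, e1080_solution_cost = 40_687, 20, 792_320
x86_tps, x86_cores, x86_solution_cost = 18_852, 40, 1_510_700

throughput_per_core_ratio = (e1080_tps / e1080_cores) / (x86_tps / x86_cores)
e1080_price_perf = e1080_tps / (e1080_solution_cost / 1_000)  # tps per $1,000
x86_price_perf = x86_tps / (x86_solution_cost / 1_000)

print(f"Throughput per core: {throughput_per_core_ratio:.1f}x")                       # ~4.3x
print(f"Price-performance:   {e1080_price_perf:.2f} vs. {x86_price_perf:.2f} tps/$1,000")
print(f"Price-performance advantage: {e1080_price_perf / x86_price_perf:.1f}x")       # ~4.1x
```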

Streamline insights and automation

In-core inferencing and machine learning

Figure: MMA inferencing diagram

5X faster AI inferencing per socket versus Power E980 *

Bring your own models and run inference where your operational data resides
Achieve faster AI inferencing per socket vs. Power E980
Perform in-core AI inferencing and ML where data resides
Provides an alternative to using separate GPU systems
Train AI models anywhere, deploy on Power without changes for AI with high RAS
Support for popular libraries, AI frameworks and ONNX runtime (a minimal inference sketch follows the notes below)

Notes:

5X improvement in per-socket inferencing throughput for large 32-bit floating-point inferencing models from the Power9 E980 (12-core modules) to the Power10 E1080 (15-core modules). Based on IBM testing using PyTorch and OpenBLAS on the same BERT Large model with the SQuAD v1.1 data set.
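To illustrate the ONNX runtime support mentioned in the list above, here is a minimal inference sketch using the open-source onnxruntime package. The model file name and input shape are placeholders, not an IBM-provided example; any model exported to ONNX could be scored the same way, in-core and without a separate GPU system:

```python
# Minimal sketch: in-core inference with onnxruntime where the data resides.
# "model.onnx" and the input shape are placeholders for illustration only.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")       # load an exported ONNX model
input_name = session.get_inputs()[0].name          # discover the input tensor name

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input batch
outputs = session.run(None, {input_name: batch})   # run inference on the CPU cores

print(outputs[0].shape)
```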

Power E1080 and AIX 7.3 deliver inferencing at scale with H2O Driverless AI: achieve 10X more users per node and 2.7X more inferences per second versus Power E980 and AIX 7.2 (a quick check of these ratios follows the notes below).

Power E1080 performance versus Power E980

Supports 10X more users per node
2.7X more inferences per second

Model Object, Optimized (MOJO) performance takes advantage of Power E1080 capabilities

480 threads per node
410 GB/s memory bandwidth per socket

Figure: H2O Driverless AI on Power10 E1080

Notes:

Based on IBM internal measurements running the MOJO scoring pipeline performing inference on Db2 data records, with the MOJO model trained from https://raw.githubusercontent.com/IBM/telco-customer-churn-on-icp4d/master/data/Telco-Customer-Churn.csv using H2O Driverless AI. Valid as of 8/27/21. E1080: 410,859 inferences per second with 800 users running on 1 node, 4 sockets, 60 cores, SMT8, 480 CPUs, AIX 7.3, Db2 11.5.6, Java IBM JDK 8.0.6-36; best performance was with 800 Java MOJO clients. E980: 152,582 inferences per second with 80 users on 1 node, 4 sockets, 40 cores, SMT8, 320 CPUs, AIX 7.2, Db2 11.5.5, Java AdoptOpenJDK 11.0.10+9; best performance was with 80 Java MOJO clients.
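As a quick check, the 10X and 2.7X figures follow directly from the measurements in the note above:

```python
# Quick check of the 10X users-per-node and 2.7X inferences-per-second claims
# using the E1080 and E980 measurements cited in the note above.
e1080_users, e1080_inf_per_sec = 800, 410_859
e980_users, e980_inf_per_sec = 80, 152_582

print(f"Users per node:        {e1080_users / e980_users:.0f}x")              # 10x
print(f"Inferences per second: {e1080_inf_per_sec / e980_inf_per_sec:.1f}x")  # ~2.7x
```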