IBM Cloud Pak® for Data Version 4.6 will reach end of support (EOS) on 31 July 2025. For more information, see the Discontinuance of service announcement for IBM Cloud Pak for Data Version 4.X.
Upgrade to IBM Software Hub Version 5.1 before IBM Cloud Pak for Data Version 4.6 reaches end of support. For more information, see Upgrading IBM Software Hub in the IBM Software Hub Version 5.1 documentation.
Supported application languages and versions
Analytics Engine Powered by Apache Spark supports running Spark applications that are written in Python, R, or Scala.

The following table lists the template IDs for each supported language and Spark version.
* Spark 3.2 is deprecated. You should move your applications to Spark 3.3.
| Spark version/language | Template ID |
|---|---|
| Spark 3.3 / Python 3.10 and 3.9, R 4.2 and 3.6, Scala 2.12 | spark-3.3-jaas-v2-cp4d-template |
| Spark 3.2 / Python 3.9, R 3.6, Scala 2.12 * | spark-3.2-jaas-v2-cp4d-template |
The following examples show sample payloads for submitting Spark jobs in different languages and Spark versions. Insert the appropriate template ID for the language and Spark version that you need.
- Payload for submitting a Spark job with Python 3.10:

  ```json
  {
    "template_id": "<template_id>",
    "application_details": {
      "application": "<your_application_file_path>",
      "arguments": ["<your_application_arguments>"],
      "conf": {
        "spark.app.name": "MyJob",
        "spark.eventLog.enabled": "true"
      },
      "env": {
        "RUNTIME_PYTHON_ENV": "python310"
      }
    }
  }
  ```

- Payload for submitting a Spark Scala job:

  ```json
  {
    "template_id": "<template_id>",
    "application_details": {
      "application": "/opt/ibm/spark/examples/jars/spark-examples*.jar",
      "arguments": ["1"],
      "class": "org.apache.spark.examples.SparkPi",
      "conf": {
        "spark.app.name": "MyJob",
        "spark.eventLog.enabled": "true",
        "spark.driver.memory": "4G",
        "spark.driver.cores": 1,
        "spark.executor.memory": "4G",
        "spark.executor.cores": 1,
        "ae.spark.executor.count": 1
      },
      "env": {
        "SAMPLE_ENV_KEY": "SAMPLE_VALUE"
      }
    }
  }
  ```

- Payload for submitting an R 4.2 Spark job:

  ```json
  {
    "template_id": "<template_id>",
    "application_details": {
      "application": "/opt/ibm/spark/examples/src/main/r/dataframe.R",
      "conf": {
        "spark.app.name": "MyJob",
        "spark.eventLog.enabled": "true",
        "spark.driver.memory": "4G",
        "spark.driver.cores": 1,
        "spark.executor.memory": "4G",
        "spark.executor.cores": 1,
        "ae.spark.executor.count": 1
      },
      "env": {
        "SAMPLE_ENV_KEY": "SAMPLE_VALUE"
      }
    }
  }
  ```
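If you submit many jobs, you can assemble payloads like the samples above programmatically. The following Python sketch is illustrative only: the template IDs come from the table in this topic, but the helper function, its name, and its defaults are assumptions, not part of the product API.

```python
import json

# Template IDs from the table in this topic.
TEMPLATE_IDS = {
    "3.3": "spark-3.3-jaas-v2-cp4d-template",
    "3.2": "spark-3.2-jaas-v2-cp4d-template",  # deprecated
}

def build_payload(spark_version, application, arguments=None,
                  conf=None, env=None):
    """Return a submission payload dict for the given Spark version.

    Hypothetical helper: it only mirrors the payload shape shown in the
    samples above; it does not call any IBM API.
    """
    return {
        "template_id": TEMPLATE_IDS[spark_version],
        "application_details": {
            "application": application,
            "arguments": arguments or [],
            # Merge caller-supplied conf over a couple of common settings.
            "conf": {
                "spark.app.name": "MyJob",
                "spark.eventLog.enabled": "true",
                **(conf or {}),
            },
            "env": env or {},
        },
    }

# Example: a Python 3.10 job payload (application path is a placeholder).
payload = build_payload("3.3", "<your_application_file_path>",
                        env={"RUNTIME_PYTHON_ENV": "python310"})
print(json.dumps(payload, indent=2))
```

You would then send the resulting JSON as the request body when submitting the Spark job.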
Parent topic: Submitting Spark jobs