Question & Answer
Question
The following error occurred while working with a large amount of data in a Spark session in a Jupyter Notebook:
org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 1 tasks (1477.3 MiB) is bigger than spark.driver.maxResultSize (1024.0 MiB)
Could you tell me how to avoid this issue?
[{"Type":"MASTER","Line of Business":{"code":"LOB76","label":"Data Platform"},"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"SSHUT6","label":"IBM Watson Studio Premium Cartridge for IBM Cloud Pak for Data"},"ARM Category":[{"code":"a8m3p000000hBziAAE","label":"Analytics Engine"}],"ARM Case Number":"TS011884551","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]
This document holds the abstract of a technical article whose full text is available to authorized users after logging in. If you do not have the right authorization for this document after logging in, instructions on what to do next will be provided.
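While the full resolution requires authorization, the error itself indicates that a single task returned about 1.4 GiB of serialized results to the driver, exceeding the default 1 GiB cap set by spark.driver.maxResultSize. A minimal PySpark sketch of the two usual mitigations follows; the app name, the 2g limit, and the output path are illustrative assumptions, not taken from the original article:

    from pyspark.sql import SparkSession

    # Option 1: raise the driver's result-size cap above the ~1.5 GiB
    # the job actually returned. A value of "0" removes the limit
    # entirely, at the risk of exhausting driver memory. The "2g"
    # value is an assumption; size it to your workload.
    spark = (
        SparkSession.builder
        .appName("large-result-job")  # hypothetical app name
        .config("spark.driver.maxResultSize", "2g")
        .getOrCreate()
    )

    # Option 2 (usually preferable): avoid pulling large results to
    # the driver at all. Instead of df.collect() or df.toPandas(),
    # keep the work distributed, e.g. write results from the executors:
    # df.write.parquet("/path/to/output")  # hypothetical output path

Note that this setting is applied when the Spark session is created, so in a notebook where a session already exists you may need to stop it first (spark.stop()) for the new configuration to take effect. Raising the limit only moves the ceiling; if the collected data keeps growing, restructuring the job so aggregations or writes happen on the executors is the durable fix.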
Document Information
Modified date: 31 January 2023
UID: ibm16856647