
Commit 3e40b38

[SPARK-42357][CORE] Log exitCode when SparkContext.stop starts
### What changes were proposed in this pull request?

This PR aims to log `exitCode` when `SparkContext.stop` starts, as a clear boundary past which the meaningless log messages from user jobs can be ignored.

### Why are the changes needed?

This PR adds the following log line:

```
23/02/06 02:12:55 INFO SparkContext: SparkContext is stopping with exitCode 0.
```

In the simplest case, the application stops like the following:

```
$ bin/spark-submit examples/src/main/python/pi.py
...
Pi is roughly 3.147080
23/02/06 02:12:55 INFO SparkContext: SparkContext is stopping with exitCode 0.
23/02/06 02:12:55 INFO AbstractConnector: Stopped Spark1cb72b8{HTTP/1.1, (http/1.1)}{localhost:4040}
23/02/06 02:12:55 INFO SparkUI: Stopped Spark web UI at http://localhost:4040
23/02/06 02:12:55 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
23/02/06 02:12:55 INFO MemoryStore: MemoryStore cleared
23/02/06 02:12:55 INFO BlockManager: BlockManager stopped
23/02/06 02:12:55 INFO BlockManagerMaster: BlockManagerMaster stopped
23/02/06 02:12:55 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
23/02/06 02:12:55 INFO SparkContext: Successfully stopped SparkContext
23/02/06 02:12:56 INFO ShutdownHookManager: Shutdown hook called
```

However, in complex cases, many log lines still appear after `SparkContext.stop(0)` is invoked, which sometimes confuses users. The new log line provides a clear boundary after which those messages can be ignored:

```
23/02/06 02:59:27 INFO TaskSetManager: Starting task 283.0 in stage 34.0 (TID 426) (172.31.218.234, executor 5, partition 283, PROCESS_LOCAL, 8001 bytes)
...
23/02/06 02:59:27 INFO BlockManagerInfo: Removed broadcast_35_piece0 on 172.31.218.244:41741 in memory (size: 5.7 KiB, free: 50.8 GiB)
...
23/02/06 02:59:27 INFO SparkUI: Stopped Spark web UI at http://r6i-16xlarge-3402-0203-apple-3-bf3f7e8624a90a37-driver-svc.default.svc:4040
...
23/02/06 02:59:27 INFO DAGScheduler: ShuffleMapStage 34 (q24a) failed in 0.103 s due to Stage cancelled because SparkContext was shut down
...
23/02/06 02:59:27 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
...
23/02/06 02:59:27 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
...
23/02/06 02:59:27 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
```

### Does this PR introduce _any_ user-facing change?

No, this is a log-only change.

### How was this patch tested?

Manually.

Closes #39900 from dongjoon-hyun/SPARK-42357.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
1 parent 97a20ed commit 3e40b38

File tree

1 file changed: +1 −0 lines changed


core/src/main/scala/org/apache/spark/SparkContext.scala (+1)
```diff
@@ -2092,6 +2092,7 @@ class SparkContext(config: SparkConf) extends Logging {
    * @param exitCode Specified exit code that will passed to scheduler backend in client mode.
    */
   def stop(exitCode: Int): Unit = {
+    logInfo(s"SparkContext is stopping with exitCode $exitCode.")
     if (LiveListenerBus.withinListenerThread.value) {
       throw new SparkException(s"Cannot stop SparkContext within listener bus thread.")
     }
```
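For context, a driver application can trigger this log line by calling the `stop(exitCode)` overload directly. The sketch below is illustrative and not part of this PR; the app name, master URL, and object name are assumptions, and it requires a Spark runtime on the classpath.

```scala
// Hypothetical driver app (not from this PR) demonstrating where the new
// boundary log line appears during shutdown.
import org.apache.spark.{SparkConf, SparkContext}

object StopWithExitCodeDemo {
  def main(args: Array[String]): Unit = {
    // App name and local master are illustrative.
    val conf = new SparkConf().setAppName("stop-demo").setMaster("local[*]")
    val sc = new SparkContext(conf)
    try {
      val count = sc.parallelize(1 to 100).count()
      println(s"count = $count")
    } finally {
      // With this patch, the first shutdown message logged here is:
      //   INFO SparkContext: SparkContext is stopping with exitCode 0.
      sc.stop(0)
    }
  }
}
```

Everything logged after that line belongs to the shutdown sequence and can safely be ignored when diagnosing user-job failures.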
