SQLQuery default print error instead of throwing error stack #581
jackylee-ch wants to merge 6 commits into jupyter-incubator:master
Conversation
cc @itamarst
Hello @stczwd, I am studying the issue of throwing errors instead of printing them in the Spark kernel (because I really like/use this project :) ). Do you think it would be better if this line, https://github.com/jupyter-incubator/sparkmagic/blob/master/sparkmagic/sparkmagic/livyclientlib/sqlquery.py#L48, returned something like this one, https://github.com/jupyter-incubator/sparkmagic/blob/master/sparkmagic/sparkmagic/magics/sparkmagicsbase.py#L44? The whole block

```python
if not success:
    if conf.spark_statement_errors_are_fatal():
        if conf.shutdown_session_on_spark_statement_errors():
            self.spark_controller.cleanup()
        raise SparkStatementException(out)
```

could be extracted into a single function instead of being duplicated across files, which would make the logic easily reusable. My main point is to question where this error throwing should be centralized; I think it should be centralized on the
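For illustration, here is a minimal sketch of what that extracted helper could look like. The function name and placement are hypothetical, and the imports assume sparkmagic's current layout:

```python
import sparkmagic.utils.configuration as conf
from sparkmagic.livyclientlib.exceptions import SparkStatementException


# Hypothetical shared helper; the name and location are illustrative.
def handle_fatal_statement_error(spark_controller, out):
    """Centralize the fatal-error handling currently duplicated across
    files: optionally shut the session down, then raise if statement
    errors are configured as fatal."""
    if conf.spark_statement_errors_are_fatal():
        if conf.shutdown_session_on_spark_statement_errors():
            spark_controller.cleanup()
        raise SparkStatementException(out)
```

Each call site could then replace its copy of the nested `if` block with a single call to this helper, keeping the raise-versus-print policy in one place.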
@PedroRossi Yep, this is a good suggestion. I will check it out.
@PedroRossi This is a great idea, but it needs a lot of changes. I will submit it in a new patch, where we can discuss it.
Could you describe the current status of this PR? What is the goal, and what does it try to do?
@itamarst This patch mainly controls whether an exception is thrown for failed SQL queries. There will be another patch to handle the problem @PedroRossi discussed with me.
devstein left a comment:
Please add tests for this
```python
result = records_to_dataframe(records_text, session.kind, self._coerce)
# ...
if conf.spark_statement_errors_are_fatal():
    raise SparkStatementException(records_text)
self.ipython_display.send_error(records_text)
```
If `conf.spark_statement_errors_are_fatal() == False` and a `SparkStatementException` is raised, wouldn't it be handled and displayed by `handle_expected_exceptions`?
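For context, `handle_expected_exceptions` follows the familiar decorator pattern: it catches a known set of exception types and displays them via `ipython_display` rather than letting the stack trace propagate. A simplified sketch of that pattern (illustrative, not sparkmagic's exact code; the stand-in exception class and the `ipython_display` attribute on the decorated object are assumptions):

```python
from functools import wraps


# Stand-in for sparkmagic's real exception class, for illustration only.
class SparkStatementException(Exception):
    pass


# The exception types treated as "expected" (user-facing) errors.
EXPECTED_EXCEPTIONS = (SparkStatementException,)


def handle_expected_exceptions(f):
    """Catch expected exceptions raised by the wrapped method and show a
    clean error message instead of a full stack trace."""
    @wraps(f)
    def wrapped(self, *args, **kwargs):
        try:
            return f(self, *args, **kwargs)
        except EXPECTED_EXCEPTIONS as err:
            # Assumes the decorated method lives on an object that
            # exposes an ipython_display with a send_error() method.
            self.ipython_display.send_error(str(err))
            return None
    return wrapped
```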
By default, a failed Spark statement produces only the error message; no exception is thrown unless the user sets `spark_statement_errors_are_fatal`. The result of running SQL needs the same behavior, and for most users the error message is sufficient.
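As a usage note, here is a sketch of how a user could opt in to the fatal behavior. It assumes sparkmagic's `conf.override` helper accepts a config key name and a value; the usual route is editing `~/.sparkmagic/config.json` instead:

```python
import sparkmagic.utils.configuration as conf

# Assumption: conf.override(name, value) overrides a single config key.
conf.override("spark_statement_errors_are_fatal", True)
# From now on, a failed statement raises SparkStatementException
# instead of only printing the error message.
```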