Add taskName and input to retry context #1423
base: master
Conversation
Signed-off-by: Mason <[email protected]>
Signed-off-by: Mason <[email protected]>
Object input = retryContext.getInput();
String taskName = retryContext.getTaskName();

if (taskName.equalsIgnoreCase(FailureActivity.class.getName())) {
Interesting, so we can have multiple if blocks:
if (taskName.equalsIgnoreCase(ChildWorkf1.class.getName())) {
    // do something
}
if (taskName.equalsIgnoreCase(ChildWorkf2.class.getName())) {
    // do something
}
if (taskName.equalsIgnoreCase(ChildActivity1.class.getName())) {
    // do something
}
Wouldn't it be more maintainable to colocate the workflow or activity with its corresponding retry logic? While I understand the code-duplication argument, centralizing all retry handling in a single utility class does not seem optimal.
@cicoyle, @artur-ciocanu, @salaboy I would like to know your thoughts too.
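To make the colocation idea concrete, a rough sketch of what I have in mind follows; the nested class is purely illustrative, and the WorkflowTaskRetryHandler/WorkflowTaskRetryContext names are taken from the snippets in this PR rather than a confirmed API:

public class FailureActivity implements WorkflowActivity {

    // Retry behaviour lives right next to the activity it governs.
    public static class RetryHandler implements WorkflowTaskRetryHandler {
        @Override
        public boolean handle(WorkflowTaskRetryContext context) {
            // decide whether this particular activity should be retried
            return true;
        }
    }

    @Override
    public Object run(WorkflowActivityContext ctx) {
        // activity logic here
        return null;
    }
}

The workflow that schedules FailureActivity could then build its WorkflowTaskOptions from FailureActivity.RetryHandler, so the retry decision and the activity evolve together.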
So in the case of multiple activities, I would recommend breaking them up into separate retry handlers that call each other (see the chain of responsibility design pattern). The main benefit of chaining handlers is that you can reuse logic from prior handlers to reduce code duplication. For example:
class HandlerOne implements WorkflowTaskRetryHandler {
    private WorkflowTaskRetryHandler nextHandler;

    public boolean handle(WorkflowTaskRetryContext context) {
        // handler logic here
        if (nextHandler != null) {
            return nextHandler.handle(context);
        }
        return true;
    }

    void setNextHandler(WorkflowTaskRetryHandler handler) {
        this.nextHandler = handler;
    }
}
In a workflow, for instance, it would look like this (HandlerTwo follows the same layout as HandlerOne above):
public class TestWorkflow implements Workflow {
    @Override
    public WorkflowStub create() {
        return context -> {
            Logger logger = context.getLogger();
            logger.info("Starting RetryWorkflow: {}", context.getName());

            HandlerOne handlerOne = new HandlerOne();
            HandlerTwo handlerTwo = new HandlerTwo();
            handlerOne.setNextHandler(handlerTwo);

            WorkflowTaskOptions taskOptions = new WorkflowTaskOptions(handlerOne);

            logger.info("RetryWorkflow is calling Activity: {}", FailureActivity.class.getName());
            Instant currentTime = context.getCurrentInstant();
            Instant result = context.callActivity(FailureActivity.class.getName(), currentTime, taskOptions, Instant.class).await();

            logger.info("RetryWorkflow finished with: {}", result);
            context.complete(result);
        };
    }
}
Ideally, when using this pattern, you would define an interface or abstract class that holds any common methods/logic (e.g. setNextHandler), but for the sake of simplicity I left it out of the example above.
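For illustration only, a minimal sketch of such a base class might look like the following; the WorkflowTaskRetryHandler and WorkflowTaskRetryContext types are assumed from the snippets above, and the ChainedRetryHandler/doHandle names are made up:

abstract class ChainedRetryHandler implements WorkflowTaskRetryHandler {
    private WorkflowTaskRetryHandler nextHandler;

    // Shared chaining logic lives here, so concrete handlers only implement doHandle.
    @Override
    public boolean handle(WorkflowTaskRetryContext context) {
        if (!doHandle(context)) {
            return false;
        }
        return nextHandler == null || nextHandler.handle(context);
    }

    void setNextHandler(WorkflowTaskRetryHandler handler) {
        this.nextHandler = handler;
    }

    // Each concrete handler puts its own retry decision here.
    protected abstract boolean doHandle(WorkflowTaskRetryContext context);
}

HandlerOne and HandlerTwo would then just extend ChainedRetryHandler and override doHandle.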
Additionally, I'm curious about scenarios where retries are driven solely by the names of workflows/activities. Could you share an example where that pattern would be more effective?
Perhaps you want to have variable backoff times depending on what each activity is doing. Additionally, if you know the name of the activity, you know the type of its input. Also, having the name is useful for tracking which task is being retried in the logs, especially if you are running multiple activities concurrently.
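As a rough sketch of those points (FlakyActivity and OrderPayload are made-up names; getTaskName() and getInput() are the accessors this PR adds), a handler could branch on the task name like this:

class FlakyActivityRetryHandler implements WorkflowTaskRetryHandler {
    @Override
    public boolean handle(WorkflowTaskRetryContext retryContext) {
        String taskName = retryContext.getTaskName();
        if (taskName.equalsIgnoreCase(FlakyActivity.class.getName())) {
            // Because the task name is known, the input can be cast to the expected type.
            OrderPayload payload = (OrderPayload) retryContext.getInput();
            // log taskName and the payload's id here so concurrent retries can be told apart
            return true; // keep retrying this particular activity
        }
        return false; // stop retrying tasks this handler doesn't recognize
    }
}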
Another possible situation: let's say you have Task A. This task makes a request to a service under the assumption that the service has access to certain data. If the service doesn't have that data, Task A fails. To recover from this failure, you may want to spin off a new activity, call it Task B, to do some action that creates that data.
In this hypothetical situation, you would only want Task B to be called if Task A fails to prevent duplicating data. Once Task B completes, we could then return to our retry handler to retry Task A automatically.
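Purely as a hypothetical sketch of that idea (TaskA and TaskB are placeholder names, getWorkflowContext() is assumed from the workflowContext field this PR describes, and whether an activity can be scheduled from inside the retry handler is itself an assumption):

class TaskARetryHandler implements WorkflowTaskRetryHandler {
    @Override
    public boolean handle(WorkflowTaskRetryContext retryContext) {
        if (retryContext.getTaskName().equalsIgnoreCase(TaskA.class.getName())) {
            // Create the missing data only when Task A has actually failed,
            // then let the runtime retry Task A.
            retryContext.getWorkflowContext()
                    .callActivity(TaskB.class.getName(), retryContext.getInput(), Void.class)
                    .await();
            return true;
        }
        return false;
    }
}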
@TheForbiddenAi I was revisiting this comment earlier this morning and had a couple of thoughts:
Regarding logs: if you include a logger statement inside each activity, it should help you identify which activity is currently running, right?
As for the second scenario, one approach could be to structure the workflow so that it starts with Activity A. If Activity A fails due to missing data, the workflow can then trigger Activity B. Once B completes successfully, you can start Activity A within the same workflow.
Also, the way I see it, retries are for transient failures, but in your case Activity A fails with a permanent failure.
Let me know your thoughts.
So for logs: if you have multiple workflows or multiple activities running asynchronously, it would be very difficult to differentiate the retry handler's logs from one another.
As for the second scenario, yes, that is a valid approach. However, it would require a try/catch and then manually calling the activity again, meaning that if you had multiple of these scenarios for the same activity, you would have to chain try/catches, which would not be great from a readability and maintainability standpoint.
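For comparison, here is a rough sketch of what the try/catch version might look like inside the workflow body (TaskA and TaskB are placeholder names, the exception type is a stand-in for the SDK's actual task-failure exception, and the three-argument callActivity overload is assumed):

return context -> {
    Instant input = context.getCurrentInstant();
    Instant result;
    try {
        result = context.callActivity(TaskA.class.getName(), input, Instant.class).await();
    } catch (RuntimeException e) { // placeholder for the SDK's task-failure exception
        // Recover by creating the missing data, then manually re-run Task A.
        context.callActivity(TaskB.class.getName(), input, Void.class).await();
        result = context.callActivity(TaskA.class.getName(), input, Instant.class).await();
    }
    context.complete(result);
};

Every additional recovery case would add another nested try/catch around this block, which is the readability concern I mentioned.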
Signed-off-by: Mason <[email protected]>
@TheForbiddenAi thank you for the insightful explanation. The PR looks good to me. Thank you
LGTM
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@              Coverage Diff               @@
##              master     #1423      +/-   ##
=============================================
+ Coverage      76.91%    78.49%     +1.57%
- Complexity      1592      1867       +275
=============================================
  Files            145       230        +85
  Lines           4843      5794       +951
  Branches         562       601        +39
=============================================
+ Hits            3725      4548       +823
- Misses           821       926       +105
- Partials         297       320        +23

View full report in Codecov by Sentry.
The workflow failure is not related to these changes. The checks should pass on a rerun.
@TheForbiddenAi @siri-varma I have some issues with this idea. I believe that the task name needs to be propagated to the runtime so it can keep track of the task name. Not having this at the runtime level will make it impossible for the orchestrator (the dapr sidecar) to keep track of what is executed, as this is only being added on the SDK side.
@salaboy I'm confused about what you mean by the task name having to be propagated to the runtime. This wouldn't change that. I don't see how this affects the knowledge of the orchestrator either. The orchestrator is what calls the retry handler in the first place. We just wrap the objects so as not to expose the durable task ones. This is just adding extra information, which we already have available to us when the context objects get instantiated at runtime, to that object. As far as inconsistencies go, I don't think other SDKs have added support for durabletask's retry handler.
What I meant is that the task name is something more general than just the retry mechanism, hence just adding the task name for the retry mechanism feels wrong to me.
I still don't really see the problem. You could say the same thing about the orchestrator context that the original durable task retry context passes. I don't see the issue with providing the user more information when retrying, to allow for more complex logic. However, if you truly think this is a blocker for this PR, I will take the task name out of the context.
Description
- Add taskName and input to WorkflowTaskRetryContext
- Changed workflowContext type in WorkflowTaskRetryContext to WorkflowContext
Issue reference
We strive to have all PRs opened based on an issue, where the problem or feature has been discussed prior to implementation.
Please reference the issue this PR will close: #1422
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list: