Add taskName and input to retry context #1423

Open · wants to merge 4 commits into master

Conversation

TheForbiddenAi
Contributor

Description

Add taskName and input to WorkflowTaskRetryContext
Changed workflowContext type in WorkflowTaskRetryContext to WorkflowContext

Issue reference

We strive to have all PRs opened based on an issue, where the problem or feature has been discussed prior to implementation.

Please reference the issue this PR will close: #1422

Checklist

Please make sure you've completed the relevant tasks for this PR, out of the following list:

  • Code compiles correctly
  • Created/updated tests
  • Extended the documentation

@TheForbiddenAi TheForbiddenAi requested review from a team as code owners June 13, 2025 13:57
@TheForbiddenAi TheForbiddenAi changed the title Add taskName and input to retry handler Add taskName and input to retry context Jun 13, 2025
Object input = retryContext.getInput();
String taskName = retryContext.getTaskName();

if (taskName.equalsIgnoreCase(FailureActivity.class.getName())) {
Contributor

@siri-varma siri-varma Jun 16, 2025

Interesting, so we can have multiple if blocks

if (taskName.equals(ChildWorkflow1.class.getName())) {
  // do something
}

if (taskName.equals(ChildWorkflow2.class.getName())) {
  // do something
}

if (taskName.equals(ChildActivity1.class.getName())) {
  // do something
}

Wouldn't it be more maintainable to colocate each workflow or activity with its corresponding retry logic? While I understand the concern about code duplication, centralizing all retry handling in a single utility class does not seem optimal.

@cicoyle , @artur-ciocanu , @salaboy I would like to know your thoughts too.
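The colocated alternative described above might look like the following sketch. `RetryContext` and `RetryHandler` are simplified local stand-ins for the SDK's `WorkflowTaskRetryContext` and `WorkflowTaskRetryHandler`, and `PaymentActivity` is a hypothetical activity, not part of this PR:

```java
// Sketch of the "colocated" layout: each activity carries its own retry
// policy instead of a central utility dispatching on task names.
// RetryContext and RetryHandler are simplified stand-ins for the SDK types.
interface RetryContext {
  String getTaskName();
  int getAttemptCount();
}

interface RetryHandler {
  boolean handle(RetryContext context);
}

// The activity body and its retry decision live side by side in one class.
class PaymentActivity implements RetryHandler {
  static final int MAX_ATTEMPTS = 3;

  String run(String orderId) {
    return "charged:" + orderId; // placeholder activity body
  }

  @Override
  public boolean handle(RetryContext context) {
    // Retry only while this activity's own attempt budget is not exhausted.
    return context.getAttemptCount() < MAX_ATTEMPTS;
  }
}
```

With this layout, anyone reading `PaymentActivity` sees its retry policy in the same file, at the cost of spreading retry logic across many classes.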

Contributor Author

@TheForbiddenAi TheForbiddenAi Jun 16, 2025


So in the case of multiple activities, I would recommend breaking them up into separate retry handlers that call each other (see the chain-of-responsibility design pattern). The main benefit of chaining handlers is that you can reuse logic from prior handlers to reduce code duplication. For example:

class HandlerOne implements WorkflowTaskRetryHandler {

  private WorkflowTaskRetryHandler nextHandler;

  @Override
  public boolean handle(WorkflowTaskRetryContext context) {
    // handler logic here
    if (nextHandler != null) {
      return nextHandler.handle(context);
    }
    return true;
  }

  void setNextHandler(WorkflowTaskRetryHandler handler) {
    this.nextHandler = handler;
  }
}

In a workflow, for instance, it would look like this (HandlerTwo follows the same layout as HandlerOne above):

public class TestWorkflow implements Workflow {

  @Override
  public WorkflowStub create() {
    return context -> {
      Logger logger = context.getLogger();
      logger.info("Starting RetryWorkflow: {}", context.getName());

      HandlerOne handlerOne = new HandlerOne();
      HandlerTwo handlerTwo = new HandlerTwo();

      handlerOne.setNextHandler(handlerTwo);
      WorkflowTaskOptions taskOptions = new WorkflowTaskOptions(handlerOne);

      logger.info("RetryWorkflow is calling Activity: {}", FailureActivity.class.getName());
      Instant currentTime = context.getCurrentInstant();
      Instant result = context.callActivity(FailureActivity.class.getName(), currentTime, taskOptions, Instant.class).await();

      logger.info("RetryWorkflow finished with: {}", result);
      context.complete(result);
    };
  }

}

Ideally, when using this pattern, you would define an interface or abstract class that holds any common methods/logic (e.g. setNextHandler), but for the sake of simplicity I left it out of the example above.
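That common base class could be sketched as below. The types here are simplified local stand-ins rather than the SDK's retry-handler API, and `FatalActivityHandler` is a hypothetical concrete handler for illustration:

```java
// Abstract base holding the common chaining logic, so concrete handlers only
// implement their own decision. Simplified stand-in for the SDK's API.
abstract class ChainedRetryHandler {
  private ChainedRetryHandler nextHandler;

  void setNextHandler(ChainedRetryHandler handler) {
    this.nextHandler = handler;
  }

  // Concrete handlers say whether a task is theirs and whether to retry it.
  protected abstract boolean appliesTo(String taskName);
  protected abstract boolean shouldRetry(String taskName);

  final boolean handle(String taskName) {
    if (appliesTo(taskName)) {
      return shouldRetry(taskName);
    }
    // Not ours: delegate down the chain; retry by default at the chain's end.
    return nextHandler != null ? nextHandler.handle(taskName) : true;
  }
}

// Example concrete handler: never retry a task named "FatalActivity".
class FatalActivityHandler extends ChainedRetryHandler {
  @Override
  protected boolean appliesTo(String taskName) {
    return "FatalActivity".equals(taskName);
  }

  @Override
  protected boolean shouldRetry(String taskName) {
    return false;
  }
}
```

Each subclass then only contains its own decision logic, and the chaining plumbing lives in one place.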

Contributor Author

@TheForbiddenAi TheForbiddenAi Jun 16, 2025


Additionally, I'm curious about scenarios where retries are driven solely by the names of workflow/activities. Could you share an example where that pattern would be more effective?

Perhaps you want to have variable backoff times depending on what each activity is doing. Additionally, if you know the name of the activity, you know the type of its input. Having the name is also useful for tracking which task is being retried in the logs, especially if you are running multiple activities concurrently.

Another possible situation: say you have Task A, which makes a request to a service under the assumption that the service has access to other data. If the service doesn't have this data, Task A fails. To recover from this failure, you may want to spin off a new activity (call it Task B) to perform some action that creates that data.

In this hypothetical situation, you would only want Task B to be called if Task A fails to prevent duplicating data. Once Task B completes, we could then return to our retry handler to retry Task A automatically.
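A name-and-input-driven handler along these lines could look like the sketch below. `RetryInfo` is a simplified local stand-in for `WorkflowTaskRetryContext`, and the task names `FlakyNetworkActivity` and `LookupActivity` are hypothetical:

```java
// Sketch of per-task retry decisions: the task name identifies the activity,
// so the handler also knows the input's concrete type.
// RetryInfo is a simplified stand-in for WorkflowTaskRetryContext.
interface RetryInfo {
  String getTaskName();
  Object getInput();
  int getAttemptCount();
}

class NameAwareRetryHandler {
  boolean handle(RetryInfo ctx) {
    if ("FlakyNetworkActivity".equals(ctx.getTaskName())) {
      // Transient network task: allow a generous attempt budget.
      return ctx.getAttemptCount() < 5;
    }
    if ("LookupActivity".equals(ctx.getTaskName())) {
      // The name tells us the input is a String key; retry only real keys.
      String key = (String) ctx.getInput();
      return !key.isEmpty() && ctx.getAttemptCount() < 2;
    }
    return false; // unknown tasks: do not retry
  }
}
```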

Contributor

@siri-varma siri-varma Jun 17, 2025


@TheForbiddenAi I was revisiting this comment earlier this morning and had a couple of thoughts:

Regarding logs: if you include a logger statement inside each activity, it should help you identify which activity is currently running, right?

As for the second scenario, one approach could be to structure the workflow so that it starts with Activity A. If Activity A fails due to missing data, the workflow can then trigger Activity B. Once B completes successfully, you can start Activity A within the same workflow.

Also, the way I see retries, they are for transient failures, but in your case Activity A fails with a permanent failure.

Let me know your thoughts.
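The workflow-structured recovery described here could be sketched as below. `MissingDataException`, `RecoveryFlow`, and the `Supplier`/`Runnable` shapes are illustrative stand-ins, not SDK types:

```java
import java.util.function.Supplier;

// Sketch: run Activity A; on a missing-data failure run Activity B once to
// create the data, then run A again within the same workflow.
class MissingDataException extends RuntimeException {}

class RecoveryFlow {
  static <T> T runWithRecovery(Supplier<T> activityA, Runnable activityB) {
    try {
      return activityA.get();
    } catch (MissingDataException e) {
      activityB.run();        // create the missing data exactly once
      return activityA.get(); // then start A again in the same workflow
    }
  }
}
```

This keeps the recovery path explicit in the workflow body, which is the structure the comment above argues for.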

Contributor Author


So for logs: if you have multiple workflows running, or multiple activities running asynchronously, it would be very difficult to tell the retry handler's log lines apart from one another.

As for the second scenario, yes, that is a valid approach. However, it would require a try/catch and then manually calling the activity again, meaning that if you had multiple of these scenarios for the same activity, you would have to chain try/catches, which would not be great from a readability or maintainability standpoint.

@siri-varma
Contributor

@TheForbiddenAi thank you for the insightful explanation. The PR looks good to me. Thank you

Contributor

@siri-varma siri-varma left a comment


LGTM


codecov bot commented Jun 16, 2025

Codecov Report

Attention: Patch coverage is 77.77778% with 2 lines in your changes missing coverage. Please review.

Project coverage is 78.49%. Comparing base (d759c53) to head (30a41cd).
Report is 172 commits behind head on master.

Files with missing lines Patch % Lines
...va/io/dapr/workflows/WorkflowTaskRetryContext.java 60.00% 2 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##             master    #1423      +/-   ##
============================================
+ Coverage     76.91%   78.49%   +1.57%     
- Complexity     1592     1867     +275     
============================================
  Files           145      230      +85     
  Lines          4843     5794     +951     
  Branches        562      601      +39     
============================================
+ Hits           3725     4548     +823     
- Misses          821      926     +105     
- Partials        297      320      +23     


@TheForbiddenAi
Contributor Author

The workflow failure is not related to these changes. The checks should pass on rerun.

@salaboy
Collaborator

salaboy commented Jun 17, 2025

@TheForbiddenAi @siri-varma I have some issues with this idea. I believe the task name needs to be propagated to the runtime so the runtime can keep track of it. Without this at the runtime level, it will be impossible for the orchestrator (the Dapr sidecar) to keep track of what is executed, since this is only being added on the SDK side.
This also brings inconsistency with other SDKs.

@TheForbiddenAi
Contributor Author

@salaboy I'm confused about what you mean by the task name having to be propagated to the runtime. This wouldn't change at all. I don't see how this affects the orchestrator's knowledge either; the orchestrator is what calls the retry handler in the first place. We just wrap the objects so as not to expose the Durable Task ones. This just adds extra information, which is already available to us when the context objects are instantiated at runtime, to that object.

As far as inconsistencies go, I don't think other SDKs have added support for Durable Task's retry handler.

@salaboy
Collaborator

salaboy commented Jun 18, 2025 via email

@TheForbiddenAi
Contributor Author

I still don't really see the problem. You could say the same thing about the orchestrator context that the original Durable Task retry context passes. I don't see the issue with providing the user more information when retrying, to allow for more complex logic. However, if you truly think this is a blocker for this PR, I will take the task name out of the context.

Labels: none yet
Projects: none yet

Development

Successfully merging this pull request may close these issues:

Add more details to WorkflowTaskRetryHandler

3 participants