
Conversation

@kabachuha commented Oct 19, 2025

This Pull Request implements "TAG: Tangential Amplifying Guidance for Hallucination-Resistant Diffusion Sampling" from https://huggingface.co/papers/2510.04533.

The code is organized in the same manner as the existing TCFG node, which implements a similar but distinct concept.

Tangential Amplifying Guidance (TAG) improves diffusion model sample quality by directly amplifying tangential components of estimated scores without modifying the model architecture.

Source code: https://github.com/hyeon-cho/Tangential-Amplifying-Guidance. Project page: https://hyeon-cho.github.io/TAG/
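
For reference, the core operation decomposes an update vector into components normal and tangential to the current latent direction and amplifies the tangential part. A minimal PyTorch sketch of that idea (the function name, variable names, and eta defaults are mine for illustration, not from the paper's code):

    import torch

    def tag_update(x, update, eta_t=1.2, eta_n=1.0, eps=1e-8):
        # Unit vector along the current latent x (norm taken over C, H, W)
        v = x / (x.norm(p=2, dim=(1, 2, 3), keepdim=True) + eps)
        # Project the update onto v (normal part); the remainder is tangential
        normal = (update * v).sum(dim=(1, 2, 3), keepdim=True) * v
        tangential = update - normal
        # Amplify the tangential component; optionally rescale the normal one
        return x + eta_t * tangential + eta_n * normal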

Examples (Flux):

[image: TAG comparison]

Closes #10323.

@chaObserv (Contributor) commented Oct 19, 2025

This doesn't seem like TAG. TAG splits the solver's update (DDIM, DPM, etc.) into normal and tangential components. Since that update is only produced after a sampling step, the post-CFG level probably cannot handle this.

@kabachuha (Author) commented Oct 19, 2025

Well, I used the code straight from the repo: https://github.com/hyeon-cho/Tangential-Amplifying-Guidance/blob/74005bdd265c0a6d85099ab234b22b437ff6c774/pipelines/pipeline_tag_stablediffusion3.py#L375-L396

    latents_dtype = latents.dtype
    output = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
    if t <= sta_tpd and t >= end_tpd:
        post_latents = latents
        v_t_2d       = post_latents / (post_latents.norm(p=2, dim=(1,2,3), keepdim=True) + 1e-8)

        latents = output

        delta_latents = latents - post_latents
        delta_unit    = (delta_latents * v_t_2d).sum(dim=(1,2,3), keepdim=True)

        normal_update_vector     = delta_unit * v_t_2d
        tangential_update_vector = delta_latents - normal_update_vector

        eta_v = t_guidance_scale
        eta_n = r_guidance_scale

        latents = post_latents + \
            eta_v * tangential_update_vector + \
            eta_n * normal_update_vector
    else: # [NOTE] Simple Path (equal to original)
        latents = output

I don't see that it requires extra steps.


"sampler_post_cfg_function" is the end function of all samplers, basically just after the sampler's step technically

ComfyUI/comfy/samplers.py, lines 353 to 390 in b4f30bd:

    def cfg_function(model, cond_pred, uncond_pred, cond_scale, x, timestep, model_options={}, cond=None, uncond=None):
        if "sampler_cfg_function" in model_options:
            args = {"cond": x - cond_pred, "uncond": x - uncond_pred, "cond_scale": cond_scale, "timestep": timestep, "input": x, "sigma": timestep,
                    "cond_denoised": cond_pred, "uncond_denoised": uncond_pred, "model": model, "model_options": model_options, "input_cond": cond, "input_uncond": uncond}
            cfg_result = x - model_options["sampler_cfg_function"](args)
        else:
            cfg_result = uncond_pred + (cond_pred - uncond_pred) * cond_scale

        for fn in model_options.get("sampler_post_cfg_function", []):
            args = {"denoised": cfg_result, "cond": cond, "uncond": uncond, "cond_scale": cond_scale, "model": model, "uncond_denoised": uncond_pred, "cond_denoised": cond_pred,
                    "sigma": timestep, "model_options": model_options, "input": x}
            cfg_result = fn(args)

        return cfg_result

    #The main sampling function shared by all the samplers
    #Returns denoised
    def sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options={}, seed=None):
        if math.isclose(cond_scale, 1.0) and model_options.get("disable_cfg1_optimization", False) == False:
            uncond_ = None
        else:
            uncond_ = uncond

        conds = [cond, uncond_]
        if "sampler_calc_cond_batch_function" in model_options:
            args = {"conds": conds, "input": x, "sigma": timestep, "model": model, "model_options": model_options}
            out = model_options["sampler_calc_cond_batch_function"](args)
        else:
            out = calc_cond_batch(model, conds, x, timestep, model_options)

        for fn in model_options.get("sampler_pre_cfg_function", []):
            args = {"conds": conds, "conds_out": out, "cond_scale": cond_scale, "timestep": timestep,
                    "input": x, "sigma": timestep, "model": model, "model_options": model_options}
            out = fn(args)

        return cfg_function(model, out[0], out[1], cond_scale, x, timestep, model_options=model_options, cond=cond, uncond=uncond_)

@chaObserv (Contributor) commented

output = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]

In diffusers, a scheduler's step function performs a single sampling step. For example, the step function in EulerDiscreteScheduler executes one Euler update.

Simplified Euler in comfy (k-diffusion) as an example:

denoised = model(x, sigmas[i] * s_in, **extra_args)  # (post-CFG already applied)
d = to_d(x, sigmas[i], denoised)
dt = sigmas[i + 1] - sigmas[i]
x_next = x + d * dt

# TAG
sampler_increment = x_next - x

The post-CFG functions run during the call to model(), and the resulting denoised is what gets returned to the sampler. TAG is designed to amplify the tangential component of the sampler's update increment (x_next - x above), but a post-CFG hook only sees denoised, so the current implementation likely does not do this.
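
For what it's worth, here is a hedged sketch of where TAG would have to hook instead: a k-diffusion-style Euler sampler that applies the decomposition to each step's increment. This is a sketch under my assumptions, not the paper's reference code; the signature follows the k-diffusion convention comfy's samplers use, and eta_t/eta_n are illustrative names.

    import torch
    from tqdm.auto import trange

    def sample_euler_tag(model, x, sigmas, extra_args=None, callback=None, disable=None,
                         eta_t=1.2, eta_n=1.0):
        # Euler sampler with TAG applied to each step's increment (illustrative sketch).
        extra_args = {} if extra_args is None else extra_args
        s_in = x.new_ones([x.shape[0]])
        for i in trange(len(sigmas) - 1, disable=disable):
            denoised = model(x, sigmas[i] * s_in, **extra_args)  # CFG/post-CFG happen in here
            d = (x - denoised) / sigmas[i]  # equivalent to to_d(x, sigmas[i], denoised)
            dt = sigmas[i + 1] - sigmas[i]
            x_next = x + d * dt

            # TAG: split the increment against the current latent direction
            increment = x_next - x
            v = x / (x.norm(p=2, dim=(1, 2, 3), keepdim=True) + 1e-8)
            normal = (increment * v).sum(dim=(1, 2, 3), keepdim=True) * v
            tangential = increment - normal
            x = x + eta_t * tangential + eta_n * normal

            if callback is not None:
                callback({"x": x, "i": i, "sigma": sigmas[i], "denoised": denoised})
        return x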
