Advanced Model Sampling #3
Comments
Looking at the reforge AMS built-in extension, it is based on ldm_patched (comfy) code, which was completely stripped out in lllyasviel's forge2 update. The missing parts seem to be:
|
k_prediction.py seems to have stuff pertaining to v-pred, EDM, epsilon, etc. (and now ztsnr), but for some reason I can't get CosXL to work even when I select it in your extension, which is weird. I did confirm v-pred works though, so that's nice. |
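For context on what those prediction modes actually change: the sampler always asks the model for a denoised estimate, and epsilon-pred vs v-pred only differ in how the raw network output is converted back to that estimate. A minimal sketch of the standard conversions in k-diffusion-style sigma space (sigma_data = 1 for SD/SDXL); these helper names are illustrative, not the actual k_prediction.py API:

import torch

SIGMA_DATA = 1.0  # latent scale used by SD / SDXL

def denoise_eps(x, model_output, sigma):
    # epsilon-prediction: the network predicts the added noise
    return x - model_output * sigma

def denoise_v(x, model_output, sigma):
    # v-prediction: the network predicts v = alpha_t * eps - sigma_t * x0 (VP parameterization),
    # which in sigma space becomes the usual c_skip / c_out combination
    c_skip = SIGMA_DATA ** 2 / (sigma ** 2 + SIGMA_DATA ** 2)
    c_out = -sigma * SIGMA_DATA / (sigma ** 2 + SIGMA_DATA ** 2) ** 0.5
    return x * c_skip + model_output * c_out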
@DenOfEquity any idea if it's possible to enable the v-pred setting from this extension through txt2img API calls? |
I don't use the API, so I'm not sure. But maybe add to the payload: |
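For illustration only: in the A1111/Forge txt2img API, extension settings normally ride along in the alwayson_scripts section of the payload. A rough, hypothetical sketch; the script title and argument order here are guesses and would need checking against the extension's ui():

import requests

payload = {
    "prompt": "test",
    "steps": 28,
    "alwayson_scripts": {
        # hypothetical script title and args; check the extension's script for the real ones
        "Advanced Model Sampling": {
            "args": [True, "v_prediction"],
        },
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)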
Hmm, noticed something odd with a scheduler on v-pred models. I thought it just didn't work on v-pred models, but it seems like it might be related to the UI itself. I wonder what could be responsible.

forge:

def beta_scheduler(n, sigma_min, sigma_max, inner_model, device):
    # From "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)
    alpha = shared.opts.beta_dist_alpha
    beta = shared.opts.beta_dist_beta
    timesteps = 1 - np.linspace(0, 1, n)
    timesteps = [stats.beta.ppf(x, alpha, beta) for x in timesteps]
    sigmas = [sigma_min + (x * (sigma_max-sigma_min)) for x in timesteps]
    sigmas += [0.0]
    return torch.FloatTensor(sigmas).to(device)

comfy:

# Implemented based on: https://arxiv.org/abs/2407.12173
def beta_scheduler(model_sampling, steps, alpha=0.6, beta=0.6):
    total_timesteps = (len(model_sampling.sigmas) - 1)
    ts = 1 - numpy.linspace(0, 1, steps, endpoint=False)
    ts = numpy.rint(scipy.stats.beta.ppf(ts, alpha, beta) * total_timesteps)

    sigs = []
    last_t = -1
    for t in ts:
        if t != last_t:
            sigs += [float(model_sampling.sigmas[int(t)])]
        last_t = t
    sigs += [0.0]
    return torch.FloatTensor(sigs)

Edit: Might be related to inner_model not actually being used in it. |
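For what it's worth, the two versions above distribute steps quite differently, which may matter more on v-pred/ztsnr models with their larger sigma range: the forge version interpolates linearly between sigma_min and sigma_max, while the comfy version uses the beta quantiles as indices into the model's own (non-linear) sigma table. A self-contained sketch comparing the two; the sigma table here is reconstructed from the usual scaled-linear betas purely for illustration:

import numpy as np
import torch
from scipy import stats

alpha, beta, n = 0.6, 0.6, 28

# Reconstruct a typical SD discrete sigma table from the scaled-linear beta schedule
betas = np.linspace(0.00085 ** 0.5, 0.012 ** 0.5, 1000) ** 2
alphas_cumprod = np.cumprod(1.0 - betas)
model_sigmas = torch.tensor(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5)
sigma_min, sigma_max = float(model_sigmas[0]), float(model_sigmas[-1])

# forge-style: linear interpolation between sigma_min and sigma_max
ts = 1 - np.linspace(0, 1, n)
ts = [stats.beta.ppf(x, alpha, beta) for x in ts]
forge_sigmas = [sigma_min + x * (sigma_max - sigma_min) for x in ts] + [0.0]

# comfy-style: beta quantiles used as indices into the model's sigma table
total = len(model_sigmas) - 1
idx = np.rint(stats.beta.ppf(1 - np.linspace(0, 1, n, endpoint=False), alpha, beta) * total)
comfy_sigmas = [float(model_sigmas[int(t)]) for t in idx] + [0.0]

print("forge:", [round(s, 3) for s in forge_sigmas[:5]], "...")
print("comfy:", [round(s, 3) for s in comfy_sigmas[:5]], "...")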
ok @DenOfEquity, I tested these 4. The last one is the code from comfy, just replacing model_sampling with inner_model. Can you check them out sometime? You've already done a bunch of sampler/scheduler adapting to forge.

def beta_scheduler_v2(n, sigma_min, sigma_max, inner_model, device):
    """
    Beta scheduler, based on "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)
    Correctly uses inner_model.inner_model.model.get_sigmas and total_timesteps for compatibility with SDXL.
    """
    alpha = shared.opts.beta_dist_alpha
    beta = shared.opts.beta_dist_beta
    # Retrieve the sigmas from the inner model
    sigmas = inner_model.get_sigmas(n)
    # Generate beta-distributed timesteps
    linspace = np.linspace(0, 1, n, endpoint=False)
    beta_timesteps = stats.beta.ppf(linspace, alpha, beta)
    # Map beta timesteps to integer indices and retrieve sigmas
    beta_indices = np.rint(beta_timesteps * n).astype(int)
    beta_indices = np.clip(beta_indices, 0, n - 1)  # Ensure valid indices
    result_sigmas = [float(sigmas[idx]) for idx in beta_indices]
    # Append the final sigma (0.0)
    result_sigmas += [0.0]
    return torch.FloatTensor(result_sigmas).to(device)
def beta_scheduler_v2b(n, sigma_min, sigma_max, inner_model, device):
    """
    Beta scheduler, based on "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)
    Correctly uses inner_model.inner_model.model.get_sigmas and total_timesteps for compatibility with SDXL.
    """
    alpha = shared.opts.beta_dist_alpha
    beta = shared.opts.beta_dist_beta
    # Retrieve the sigmas from the inner model
    sigmas = inner_model.get_sigmas(n+1)
    # Total timesteps based on the length of the sigma schedule
    total_timesteps = len(sigmas) - 1
    # Generate beta-distributed timesteps
    linspace = np.linspace(0, 1, n, endpoint=False)
    beta_timesteps = stats.beta.ppf(linspace, alpha, beta) * total_timesteps
    # Map beta timesteps to integer indices and retrieve sigmas
    beta_indices = np.rint(beta_timesteps).astype(int)
    beta_indices = np.clip(beta_indices, 0, total_timesteps - 1)  # Ensure valid indices
    result_sigmas = [float(sigmas[idx]) for idx in beta_indices]
    # Append the final sigma (0.0)
    result_sigmas += [0.0]
    return torch.FloatTensor(result_sigmas).to(device)
def beta_scheduler_v2c(n, sigma_min, sigma_max, inner_model, device):
    """
    Beta scheduler, based on "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)
    Correctly uses inner_model.inner_model.model.get_sigmas and total_timesteps for compatibility with SDXL.
    """
    alpha = shared.opts.beta_dist_alpha
    beta = shared.opts.beta_dist_beta
    # Retrieve the sigmas from the inner model
    sigmas = inner_model.get_sigmas(n + 1)
    # Total timesteps based on the length of the sigma schedule
    total_timesteps = len(sigmas) - 1
    # Generate beta-distributed timesteps
    linspace = np.linspace(0, 1, n, endpoint=False)
    beta_timesteps = stats.beta.ppf(linspace, alpha, beta) * total_timesteps
    # Map beta timesteps to integer indices and retrieve sigmas
    beta_indices = np.rint(beta_timesteps).astype(int)
    beta_indices = np.clip(beta_indices, 0, total_timesteps - 1)  # Ensure valid indices
    result_sigmas = []
    last_t = -1
    for t in beta_indices:
        if t != last_t:
            # Fetch sigmas using the get_sigmas function
            result_sigmas += [float(sigmas[int(t)])]
        last_t = t
    # Append the final sigma (0.0)
    result_sigmas += [0.0]
    return torch.FloatTensor(result_sigmas).to(device)
def beta_scheduler_v2d(n, sigma_min, sigma_max, inner_model, device):
    """
    Beta scheduler, based on "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et. al, 2024)
    Correctly uses inner_model.inner_model.model.get_sigmas and total_timesteps for compatibility with SDXL.
    """
    alpha = shared.opts.beta_dist_alpha
    beta = shared.opts.beta_dist_beta
    total_timesteps = (len(inner_model.sigmas) - 1)
    ts = 1 - np.linspace(0, 1, n, endpoint=False)
    ts = np.rint(stats.beta.ppf(ts, alpha, beta) * total_timesteps)
    sigs = []
    last_t = -1
    for t in ts:
        if t != last_t:
            sigs += [float(inner_model.sigmas[int(t)])]
        last_t = t
    sigs += [0.0]
    return torch.FloatTensor(sigs).to(device)
|
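In case it helps when checking these outside the webui, here is a throwaway harness for beta_scheduler_v2d (the easiest one to stub, since it only reads inner_model.sigmas). It assumes the function definition above is pasted into the same file; modules.shared is faked with a SimpleNamespace, and the sigma table is the same reconstructed SD-style one as in the earlier sketch:

import types
import numpy as np
import torch
from scipy import stats

# Fake modules.shared so the scheduler can run standalone (assumes only these two opts are read)
shared = types.SimpleNamespace(opts=types.SimpleNamespace(beta_dist_alpha=0.6, beta_dist_beta=0.6))

class FakeInnerModel:
    """Bare-bones stand-in exposing only the .sigmas table that beta_scheduler_v2d reads."""
    def __init__(self):
        betas = np.linspace(0.00085 ** 0.5, 0.012 ** 0.5, 1000) ** 2  # SD-style scaled-linear betas
        ac = np.cumprod(1.0 - betas)
        self.sigmas = torch.tensor(((1 - ac) / ac) ** 0.5)

# beta_scheduler_v2d must be defined in this module for the call below to resolve
print(beta_scheduler_v2d(28, 0.03, 14.6, FakeInnerModel(), torch.device("cpu")))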
I've tested 9th_Tail and noobaiXLNAIXL_vPredTestVersion: using Euler a with the Beta scheduler, results were reasonable, unless I used (very) low sampling steps.
Lowering |
From my understanding, the comfy version (and the fact that inner_model was added to the forge function at one point) uses the model's sigmas in this scheduler, but I didn't read the paper so I'm not sure. I only tested with two samplers and 28 steps; c and d usually looked good. All of them except c look similar, but the first two have a fried look, probably related to what you're saying. If we want to add an option to use sigma min and max, it's probably not that hard. What values did you try for sigma_max for the current Beta? |
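On the sigma min/max option: one possible shape, keeping the same signature as the variants above so it could sit alongside them, is to keep the beta quantiles but interpolate in log-sigma space between the requested sigma_min and sigma_max instead of indexing the model's sigma table. This is an untested sketch, assuming the same module-level shared/np/stats/torch as the webui scheduler file, and the log-space interpolation is just one choice (it biases steps toward low sigmas, like the exponential scheduler), not how forge or comfy do it:

def beta_scheduler_minmax(n, sigma_min, sigma_max, inner_model, device):
    """
    Beta-distributed schedule that respects the requested sigma_min/sigma_max instead of
    the model's sigma table. Interpolates in log-sigma space; untested sketch only.
    """
    alpha = shared.opts.beta_dist_alpha
    beta = shared.opts.beta_dist_beta
    ts = 1 - np.linspace(0, 1, n, endpoint=False)
    ts = stats.beta.ppf(ts, alpha, beta)
    log_min, log_max = np.log(sigma_min), np.log(sigma_max)
    sigmas = [float(np.exp(log_min + t * (log_max - log_min))) for t in ts]
    sigmas += [0.0]
    return torch.FloatTensor(sigmas).to(device)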
Dropping |
Thank you for your hard work on mainline Forge and this extension.
With the new developments in local AI models, there is a shift towards v-pred, ztsnr-trained models.
Is it possible for this extension to adapt Advanced Model Sampling from the Reforge fork?
Many thanks for considering my request.