
LLM Node in a Chatflow Didn't Work Normally #12944

Open
A-I-R opened this issue Jan 22, 2025 · 1 comment
Labels
🐞 bug Something isn't working

Comments


A-I-R commented Jan 22, 2025

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit the issue in English, otherwise it will be closed. Thank you! :)
  • Please do not modify this template and fill in all the required fields. :)

Dify version

v0.15.1

Cloud or Self Hosted

Self Hosted (Source)

Steps to reproduce

I created a chatflow in Dify, but the LLM nodes don't work properly. Specifically, I encountered two issues:

  1. The memory function is not working properly. If I turn the memory function on, an error about the "result_text" variable is raised on the second turn of the chat. If the memory function is off, everything is fine. I pasted the error I got from the Dify API service below.

[Screenshot: error from the Dify API service]

  2. Some of the LLM nodes don't fetch a response from the LLM. The chatflow originally had one LLM node (the one mentioned in the first issue), and it worked well except for the memory function. When I added another LLM node using the same LLM provider, it didn't get a response properly. Sometimes part of the streamed data from the LLM is returned, followed by an error; sometimes nothing is returned and an error is raised directly. No error details were given in either case. Even when I set the new node up with the same prompt, parameters, and input data as the first node, it still didn't work. I tried deleting the model provider and adding it again, but nothing changed. I checked the vLLM logs (the model is a self-hosted OpenAI-compatible service based on vLLM) but didn't find anything strange. I pasted the error shown in the web frontend and the vLLM logs below. I was asking the LLM to extract the keywords from my query about the weather.

[Screenshot: error shown in the web frontend]

[Screenshot: vLLM logs]

✔️ Expected Behavior

The memory function performs normally, and all LLM nodes sharing the same model provider work normally.

❌ Actual Behavior

The memory function stops working from the second turn of a chat, and only one specific LLM node works properly.

@dosubot dosubot bot added the 🐞 bug Something isn't working label Jan 22, 2025

dosubot bot commented Jan 22, 2025

Hey @A-I-R! I'm here to help you with any bugs, questions, or contributions. Let's get this sorted out together!

I found a similar unsolved discussion that might be relevant to your issue: How do I get the memory feature in my workflow?, which was last updated on September 12, 2024 [1].

Regarding the issues you're facing:

  1. Memory Function Error: The error related to the "result_text" variable suggests that it might not be assigned a value before being accessed. This could be a bug in the workflow code where the memory function does not properly handle the variable assignment. You might want to check the specific lines in graph_engine.py, node.py in the base directory, and node.py in the llm directory where the error occurs (see the illustrative sketch after this list).

  2. LLM Node Response Issue: When multiple LLM nodes use the same model provider, there might be conflicts in how the nodes are configured or how they interact with the provider. Since the error message is "[openai_api_compatible] Error" with no further detail, it could be related to the configuration of the self-hosted OpenAI-compatible service based on vLLM. Double-check the configuration settings and ensure that the model provider is correctly set up to handle multiple requests (a direct-endpoint test sketch follows this list).
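
On the first point, errors of the form "local variable 'result_text' referenced before assignment" usually come from a variable that is only bound inside a loop or conditional branch. The sketch below is purely illustrative (the function and names are hypothetical, not taken from the Dify codebase) and only shows the general failure pattern:

```python
def run_llm_node(stream_chunks):
    # Hypothetical illustration, not Dify code: result_text is only bound
    # inside the loop, so if the stream yields no chunks (for example because
    # the provider errors out after the memory/history is prepended), the
    # return line raises:
    #   UnboundLocalError: local variable 'result_text' referenced before assignment
    # (Simplified: a real implementation would accumulate the chunks.)
    for chunk in stream_chunks:
        result_text = chunk
    return result_text


run_llm_node(["Hello", " world"])  # works: result_text is bound by the loop
run_llm_node([])                   # fails with the "result_text" style error
```

On the second point, one way to isolate the problem is to call the vLLM OpenAI-compatible endpoint directly, outside Dify, with streaming enabled. The sketch below uses the openai Python client; the base URL, API key, and model name are placeholders that would need to be replaced with the actual deployment values:

```python
from openai import OpenAI

# All values below are placeholders for the self-hosted vLLM deployment.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed vLLM OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM usually ignores the key unless one is configured
)

stream = client.chat.completions.create(
    model="your-model-name",  # placeholder model id served by vLLM
    messages=[
        {"role": "system", "content": "Extract the keywords from the user's query."},
        {"role": "user", "content": "What will the weather be like in Tokyo tomorrow?"},
    ],
    stream=True,
)

# Print the streamed tokens; an exception or a truncated stream here would
# point at the provider, while a clean run points back at the Dify node setup.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

If this completes cleanly (including when run a couple of times concurrently) but the second LLM node in the chatflow still fails, the issue is more likely in the node or provider configuration than in the vLLM service itself.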

For both issues, reviewing the error handling strategies in Dify might help manage these exceptions more effectively. You can configure error handling to retry on failure or execute a fail branch to continue the workflow despite errors [2].
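
For reference, these strategies are configured on the node in the Dify UI rather than written as code, but conceptually they behave like the following sketch (the function names are illustrative, not Dify APIs):

```python
import time


def call_llm():
    """Placeholder for the LLM node's request; raises to simulate a failure."""
    raise RuntimeError("[openai_api_compatible] Error")


def run_with_error_handling(max_retries: int = 3, backoff_seconds: float = 2.0) -> str:
    # "Retry on failure": re-run the node a few times before giving up.
    for attempt in range(1, max_retries + 1):
        try:
            return call_llm()
        except RuntimeError as exc:
            if attempt == max_retries:
                # "Fail branch": fall through to an alternative path so the
                # chatflow can still finish and surface a useful message.
                return f"LLM node failed after {attempt} attempts: {exc}"
            time.sleep(backoff_seconds)


print(run_with_error_handling())
```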

To continue talking to Dosu, mention @dosu.

