Interrupt is not actually getting interrupted for the second time. #2935
Comments
Solved it. Have a look at this PR; it's a single-line change.
I think the framework is actually working as designed. The interrupt() function is meant to cache resume values within a task so it can handle multiple sequential interrupts in a node: when the node re-runs after a resume, previously supplied resume values are replayed instead of pausing again.

If you need to force a new user input each time, you have two options:

Option 1 - Clear your previous response.
Option 2 - Make each request unique.

Either approach will ensure you get fresh user input each time the node runs.
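For reference, here is a minimal runnable sketch (not from this thread; the node, state fields, and prompts are made up) of the caching behavior described above: a single node with two sequential interrupt() calls, resumed one value at a time, so earlier answers are replayed each time the node re-runs.

```python
# Two sequential interrupts inside one node; names are illustrative only.
from typing import TypedDict
import uuid

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command


class State(TypedDict):
    first: str
    second: str


def ask_twice(state: State) -> State:
    # Pauses here on the very first run.
    first = interrupt("First question?")
    # On the next resume the node re-runs from the top; "first" is replayed
    # from the cached resume value and execution pauses here instead.
    second = interrupt("Second question?")
    return {"first": first, "second": second}


builder = StateGraph(State)
builder.add_node("ask_twice", ask_twice)
builder.add_edge(START, "ask_twice")
builder.add_edge("ask_twice", END)
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": str(uuid.uuid4())}}
graph.invoke({"first": "", "second": ""}, config=config)    # pauses at the first interrupt
graph.invoke(Command(resume="answer 1"), config=config)     # pauses at the second interrupt
result = graph.invoke(Command(resume="answer 2"), config=config)
print(result)  # {'first': 'answer 1', 'second': 'answer 2'}
```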
@rubailly I tried both of them, but neither of them waited the second time. Do I need to do anything in the resume?

```python
return Command(
    goto='lookup_node',
    resume={'userAgentInteractionInfo': {'agentRequest': agent_request, 'userResponse': user_response}},
    update={'messages': [HumanMessage(content=user_response, name="User_Response")]},
)
```

By the way, thanks for helping me out.
@vigneshmj1997 I saw your PR, but rather than us changing the logic, there should be something within LangGraph to solve this issue.
@Saisiva123 I know, it was just a quick solution. I wanted the code to be in production, so I did it; I thought the situation was the same for you.
@vigneshmj1997 I completely agree with you; the LangGraph team has to look into this because it is something very important. Please feel free to share anything that you find useful.
Thanks for reporting; we will investigate this issue.
Does anyone on this thread have a concise reproducible example of the issue?

Re: @vigneshmj1997's request for an option to not cache: without caching, the only way for your graph to know not to interrupt again would be to resume AND also update the state, which seems more complicated to me.
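As an illustration of that point, here is a minimal sketch (the user_response field is hypothetical, not from this thread) where whether the node interrupts again is driven entirely by graph state rather than by resume-value caching: the stored response has to be kept in, or cleared from, state to control repeat prompts.

```python
# Gate interrupt() on graph state; the field name is hypothetical.
from typing import TypedDict
import uuid

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command


class State(TypedDict):
    user_response: str


def ask_user(state: State) -> dict:
    if state.get("user_response"):
        # A response is already stored, so the node does not pause again;
        # forcing a fresh prompt means clearing this field first.
        return {}
    return {"user_response": interrupt("Please provide input")}


builder = StateGraph(State)
builder.add_node("ask_user", ask_user)
builder.add_edge(START, "ask_user")
builder.add_edge("ask_user", END)
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": str(uuid.uuid4())}}
graph.invoke({"user_response": ""}, config=config)          # pauses at interrupt()
graph.invoke(Command(resume="fresh input"), config=config)  # stores the response
```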
@vigneshmj1997 or @Saisiva123 do you have a reproducible example that illustrates the issue?
I am also experiencing this issue. However, when I stripped it down to a minimal version in a notebook to reproduce it and create a version that I could share here, the interrupts worked as expected and there was no issue. It's extremely puzzling.
@underclocked any chance you're using subgraphs in the full example where you're having an issue?
The scenario is this: I have a multi-agent system. At this point the previous responses that I've provided to the agent are being considered again for the new question that the agent asks, when instead it should wait for my new input. Rather than reusing the previously provided value, I need the agent to actually interrupt for the second time as well.
Yes, I'm using subgraphs. In my full example the flow is something like:

```python
def continue_node(state):
    continue_value = interrupt("Do you want to continue (y/n)?")
    if continue_value == 'y':
        state["subgraph"]["continue"] = True
    else:
        state["subgraph"]["continue"] = False
    return state

def feedback_node(state):
    feedback_value = interrupt("Provide feedback")
    state["subgraph"]["feedback"] = feedback_value
    # do stuff based on feedback
    return state

...

def continue_router(state) -> Literal["feedback_node", "finalize"]:
    if state["subgraph"]["continue"]:
        return "feedback_node"
    else:
        return "finalize"

subgraph_builder = StateGraph()
# other nodes
subgraph_builder.add_node("continue_node", continue_node)
subgraph_builder.add_node("feedback_node", feedback_node)
subgraph_builder.add_conditional_edges("continue_node", continue_router)
# other nodes
subgraph = subgraph_builder.compile()

parent_builder = StateGraph()
# other nodes
parent_builder.add_node("subgraph", subgraph)
# other nodes and edges

config = {"configurable": {"thread_id": some_string}}
checkpointer = MemorySaver()
parent_graph = parent_builder.compile(checkpointer=checkpointer)
```

The graph is initially run like:

```python
for s in parent_graph.stream(state, stream_mode="values", config=config, subgraphs=True):
    print(s)
```

When resuming, the graph is restarted with:

```python
resume_message = "y"
for s in parent_graph.stream(Command(resume=resume_message), config=config, subgraphs=True):
    print(s)
```

It successfully breaks in the first node, but then when "y" is received it does not break in the feedback node.
@Saisiva123 it's not clear from your description what the problem is -- I don't know whether the issue is in user code (misuse) or a bug in the framework. It would be really helpful to read through this: https://stackoverflow.com/help/minimal-reproducible-example to understand how to create a minimal reproducible example that maintainers can use to diagnose the issue. The example should ideally be copy-pasteable without any modifications (i.e., it should also include all the necessary imports, etc.).
Here is a minimal example that exhibits the issue and does not interrupt.

```python
from langgraph.graph import Graph, StateGraph, START, END
from langchain_core.messages import HumanMessage, AIMessage
from typing import Literal, List, Dict, Any, Union
import uuid
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
# from IPython.display import Image, display

# define state (shared between subgraph and parent graph)
class ParentState(TypedDict):
    parent_continue_decision: bool
    parent_iteration: int
    parent_done: bool

class SubgraphState(TypedDict):
    subgraph_continue_decision: bool
    subgraph_iteration: int
    subgraph_done: bool

class AgentState(TypedDict):
    messages: list[Any]
    subgraph_state: SubgraphState
    parent_state: ParentState

# define subgraph nodes
def subgraph_init_node(state: AgentState) -> AgentState:
    print("subgraph_init_node")
    state["subgraph_state"]["subgraph_iteration"] = 0
    state["subgraph_state"]["subgraph_done"] = False
    return state

def subgraph_agent_node(state: AgentState) -> AgentState:
    print("subgraph_agent_node")
    state["messages"].append(AIMessage(content=f"Iteration {state['subgraph_state']['subgraph_iteration']}"))
    state["subgraph_state"]["subgraph_iteration"] = state["subgraph_state"]["subgraph_iteration"] + 1
    return state

def subgraph_continue_node_1(state: AgentState) -> AgentState:
    print("subgraph_continue_node_1")
    value1 = interrupt("Continue 1? (y/n)")
    if value1 != "n":
        state["subgraph_state"]["subgraph_continue_decision"] = True
    else:
        state["subgraph_state"]["subgraph_continue_decision"] = False
    return state

def subgraph_continue_node_2(state: AgentState) -> AgentState:
    print("subgraph_continue_node_2")
    value2 = interrupt("Continue 2? (y/n)")
    if value2 != "n":
        state["subgraph_state"]["subgraph_continue_decision"] = True
    else:
        state["subgraph_state"]["subgraph_continue_decision"] = False
    return state

def subgraph_finalize_node(state: AgentState) -> AgentState:
    print("subgraph_finalize_node")
    state["subgraph_state"]["subgraph_done"] = True
    return state

# define parent graph nodes
def parent_init_node(state: AgentState) -> AgentState:
    print("parent_init_node")
    state["parent_state"]["parent_iteration"] = 0
    state["parent_state"]["parent_done"] = False
    return state

def parent_agent_node(state: AgentState) -> AgentState:
    print("parent_agent_node")
    state["messages"].append(AIMessage(content=f"Iteration {state['parent_state']['parent_iteration']}"))
    state["parent_state"]["parent_iteration"] = state["parent_state"]["parent_iteration"] + 1
    return state

def parent_continue_node(state: AgentState) -> AgentState:
    print("parent_continue_node")
    value = interrupt("Continue? (y/n)")
    if value != "n":
        state["parent_state"]["parent_continue_decision"] = True
    else:
        state["parent_state"]["parent_continue_decision"] = False
    return state

def parent_finalize_node(state: AgentState) -> AgentState:
    print("parent_finalize_node")
    state["parent_state"]["parent_done"] = True
    return state

def parent_router(state: AgentState) -> Literal["parent_agent", "parent_finalize"]:
    if state["parent_state"]["parent_continue_decision"]:
        return "parent_agent"
    else:
        return "parent_finalize"

def subgraph_router(state: AgentState) -> Literal["subgraph_agent", "subgraph_finalize"]:
    if state["subgraph_state"]["subgraph_continue_decision"]:
        return "subgraph_agent"
    else:
        return "subgraph_finalize"

# define graphs
# subgraph
subgraph_builder = StateGraph(AgentState)
subgraph_builder.add_node("subgraph_init", subgraph_init_node)
subgraph_builder.add_node("subgraph_agent", subgraph_agent_node)
subgraph_builder.add_node("subgraph_continue_1", subgraph_continue_node_1)
subgraph_builder.add_node("subgraph_continue_2", subgraph_continue_node_2)
subgraph_builder.add_node("subgraph_finalize", subgraph_finalize_node)
subgraph_builder.add_edge(START, "subgraph_init")
subgraph_builder.add_edge("subgraph_init", "subgraph_agent")
subgraph_builder.add_edge("subgraph_agent", "subgraph_continue_1")
subgraph_builder.add_edge("subgraph_continue_1", "subgraph_continue_2")
subgraph_builder.add_conditional_edges("subgraph_continue_2", subgraph_router)
subgraph_builder.add_edge("subgraph_finalize", END)
subgraph = subgraph_builder.compile()

# parent graph
parent_builder = StateGraph(AgentState)
parent_builder.add_node("parent_init", parent_init_node)
parent_builder.add_node("parent_agent", subgraph)
parent_builder.add_node("parent_continue", parent_continue_node)
parent_builder.add_node("parent_finalize", parent_finalize_node)
parent_builder.add_edge(START, "parent_init")
parent_builder.add_edge("parent_init", "parent_agent")
parent_builder.add_edge("parent_agent", "parent_continue")
parent_builder.add_conditional_edges("parent_continue", parent_router)
parent_builder.add_edge("parent_finalize", END)

checkpointer = MemorySaver()
parent_graph = parent_builder.compile(checkpointer=checkpointer)
# display(Image(parent_graph.get_graph(xray=1).draw_mermaid_png()))

state = AgentState()
state = {
    "messages": [HumanMessage(content="Hello")],
    "subgraph_state": {
        "subgraph_continue_decision": True,
        "subgraph_iteration": 0,
        "subgraph_done": False
    },
    "parent_state": {
        "parent_continue_decision": True,
        "parent_iteration": 0,
        "parent_done": False
    }
}

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}
print(parent_graph.get_state(config=config))

for s in parent_graph.stream(state, config=config, stream_mode="values", subgraphs=True):
    print(s)
    state = s

for s in parent_graph.stream(Command(resume='y'), config=config, subgraphs=True):
    print(s)
    state = s
print(parent_graph.get_state(subgraphs=True, config=config))

for s in parent_graph.stream(Command(resume='n'), config=config, subgraphs=True):
    print(s)
print(parent_graph.get_state(subgraphs=True, config=config))
```

Here's the pyproject.toml and the uv.lock.
Closing this issue in favor of #3072. Please follow the discussion there.
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
There should be some sort of force interrupt that waits for the user input the second time as well.
System Info
python -m langchain_core.sys_info