Hi @tomasonjo, thank you very much for sharing this very informative material.
In this notebook, how could I change
```python
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)
```
to
```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id=repo_id,
    model_kwargs={"temperature": TEMPERATURE, "max_length": MAX_TOKENS},
)
```
or any other HuggingFacePipeline, and still make the tutorial work?
Of course, `cypher_chain`'s LLMs would also have to be changed to other pipelines, but I have not got there yet.
The error I get is:
```
File ~/Projects/blogs/openaifunction_constructing_graph.py:277, in extract_and_store_graph(document, nodes, rels)
    275 extract_chain = get_extraction_chain(nodes, rels)
--> 277 data = extract_chain.run(document.page_content)

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505 if len(args) != 1:
    506     raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    508     _output_key
    509 ]
    511 if kwargs and not args:
    512     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    513         _output_key
    514     ]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
    309     )
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/llm.py:104, in LLMChain._call(self, inputs, run_manager)
     98 def _call(
     99     self,
    100     inputs: Dict[str, Any],
    101     run_manager: Optional[CallbackManagerForChainRun] = None,
    102 ) -> Dict[str, str]:
    103     response = self.generate([inputs], run_manager=run_manager)
--> 104     return self.create_outputs(response)[0]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/llm.py:258, in LLMChain.create_outputs(self, llm_result)
    256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
    257     """Create outputs from response."""
--> 258     result = [
    259         # Get the text of the top generated string.
    260         {
    261             self.output_key: self.output_parser.parse_result(generation),
    262             "full_generation": generation,
    263         }
    264         for generation in llm_result.generations
    265     ]
    266     if self.return_final_only:
    267         result = [{self.output_key: r[self.output_key]} for r in result]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/chains/llm.py:261, in <listcomp>(.0)
    256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
    257     """Create outputs from response."""
    258     result = [
    259         # Get the text of the top generated string.
    260         {
--> 261             self.output_key: self.output_parser.parse_result(generation),
    262             "full_generation": generation,
    263         }
    264         for generation in llm_result.generations
    265     ]
    266     if self.return_final_only:
    267         result = [{self.output_key: r[self.output_key]} for r in result]

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/output_parsers/openai_functions.py:174, in PydanticAttrOutputFunctionsParser.parse_result(self, result, partial)
    173 def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
--> 174     result = super().parse_result(result)
    175     return getattr(result, self.attr_name)

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/output_parsers/openai_functions.py:157, in PydanticOutputFunctionsParser.parse_result(self, result, partial)
    156 def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
--> 157     _result = super().parse_result(result)
    158     if self.args_only:
    159         pydantic_args = self.pydantic_schema.parse_raw(_result)  # type: ignore

File ~/anaconda3/envs/master/lib/python3.8/site-packages/langchain/output_parsers/openai_functions.py:26, in OutputFunctionsParser.parse_result(self, result, partial)
     24 generation = result[0]
     25 if not isinstance(generation, ChatGeneration):
---> 26     raise OutputParserException(
     27         "This output parser can only be used with a chat generation."
     28     )
     29 message = generation.message
     30 try:

OutputParserException: This output parser can only be used with a chat generation.
```
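For reference, the exception is raised because the extraction chain's output parser (`PydanticOutputFunctionsParser`) relies on OpenAI function calling and only accepts `ChatGeneration` results, while plain-text LLMs such as `HuggingFaceHub` return ordinary `Generation` objects. One possible workaround is to replace the function-calling parser with a text-based `PydanticOutputParser`. A minimal sketch, assuming a simplified stand-in schema (the `Triple`/`KnowledgeGraph` models, the prompt, and the `repo_id` below are hypothetical, not the notebook's own):

```python
from typing import List

from pydantic import BaseModel, Field
from langchain.chains import LLMChain
from langchain.llms import HuggingFaceHub
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate

# Hypothetical, simplified stand-in for the notebook's graph schema.
class Triple(BaseModel):
    head: str = Field(description="subject entity")
    relation: str = Field(description="relationship type")
    tail: str = Field(description="object entity")

class KnowledgeGraph(BaseModel):
    triples: List[Triple] = Field(description="extracted (head, relation, tail) triples")

# PydanticOutputParser works on plain text, so no chat generation is required.
parser = PydanticOutputParser(pydantic_object=KnowledgeGraph)

prompt = PromptTemplate(
    template=(
        "Extract a knowledge graph as triples from the text below.\n"
        "{format_instructions}\n"
        "Text: {input}"
    ),
    input_variables=["input"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

llm = HuggingFaceHub(
    repo_id="mistralai/Mistral-7B-Instruct-v0.1",  # placeholder; any instruct model
    model_kwargs={"temperature": 0.1, "max_length": 2048},
)

extract_chain = LLMChain(llm=llm, prompt=prompt, output_parser=parser)
data = extract_chain.run("Amelia Earhart was born in Atchison, Kansas.")
```

Whether this works in practice depends on the chosen model reliably emitting JSON that matches the schema, which is exactly what OpenAI function calling is designed to guarantee.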
I got stuck on the same error. It seems the code is not optimized for other LLMs yet.
It's not that the code isn't optimized for other LLMs; it's that most other LLMs aren't optimized for Cypher generation.
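For completeness, the mechanical swap for `cypher_chain` itself is straightforward, since `GraphCypherQAChain.from_llm` accepts any LangChain LLM. A minimal sketch, assuming a Neo4j setup like the notebook's (the connection details and `repo_id` below are placeholders):

```python
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph
from langchain.llms import HuggingFaceHub

# Placeholder connection details; use the notebook's own Neo4j credentials.
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

# Any LangChain LLM can be passed in; a code-capable model tends to do
# better at Cypher than a general chat model.
hf_llm = HuggingFaceHub(
    repo_id="mistralai/Mistral-7B-Instruct-v0.1",  # placeholder repo_id
    model_kwargs={"temperature": 0.1, "max_length": 2048},
)

cypher_chain = GraphCypherQAChain.from_llm(llm=hf_llm, graph=graph, verbose=True)
print(cypher_chain.run("Which entities are mentioned most often?"))
```

As noted above, the real constraint is whether the chosen model actually emits valid Cypher, not the wiring.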