How to Integrate a RAG System or Use AlwaysReddy for a Specific Use Case? #101
-
Hi everyone,
-
Unfortunately, AlwaysReddy doesn't support RAG at all. Major changes to the codebase are needed to make it work.

It's come up a few times on the Discord, but so far no one has actually attempted it. Personally, from researching this, I think the best approach would be to largely rewrite AlwaysReddy on top of LangChain or LlamaIndex, since that would make RAG, tool calling, and similar features trivial to add.

Short of rewriting AlwaysReddy, if you want to talk voice-to-voice with an LLM that supports RAG and tooling, and you don't specifically need the global hotkeys AlwaysReddy provides, look into Open WebUI. It supports those features, including a hands-free voice-to-voice call mode.
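To give a sense of why a LlamaIndex-based rewrite would make RAG trivial, here's a minimal retrieval sketch using LlamaIndex's quickstart API. The `docs/` folder and the query text are placeholders (nothing here comes from AlwaysReddy's codebase), and hooking the answer into AlwaysReddy's hotkey/TTS flow is exactly the part that would still require the rewrite:

```python
# Minimal RAG sketch with LlamaIndex (pip install llama-index).
# Uses OpenAI by default, so OPENAI_API_KEY must be set in the environment.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("docs").load_data()  # placeholder folder of files to ground answers on
index = VectorStoreIndex.from_documents(documents)     # build an in-memory vector index
query_engine = index.as_query_engine()                 # retrieval + answer synthesis in one object

response = query_engine.query("What do my notes say about project deadlines?")
print(response)
```

The whole RAG loop (chunking, embedding, retrieval, answer synthesis) is handled by the library, which is the main argument for building on it rather than bolting retrieval onto the current code.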
-
ok sure, thanks
-
Can you tell me where the user's input and the LLM's response are stored, or in which variable? I want to pass that variable to another file so I can store it for a specific purpose.