Some colleagues and I have been discussing RAG frameworks lately, with a bent toward those with strong support for custom development. The complexity of configuration frustrated one of them, so he went in search of tools with a simpler installation and configuration process. That search ultimately led him to RAGFlow.
I haven’t tried it yet; I plan to do so on one of the homelab GPU servers this evening. From what I’ve read, local LLM support is good, which means I’ll be using Ollama with Llama 3.2. I’ll report back on how it goes.
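For anyone who wants to follow along, this is roughly the setup I expect to run through tonight, pieced together from the RAGFlow README and the Ollama docs. Treat it as a sketch, not a verified recipe: the compose file name, the `vm.max_map_count` prerequisite, and the host-binding detail for Ollama are all things I still need to confirm on my own hardware.

```sh
# Pull the model locally with Ollama (llama3.2 is the tag in the Ollama library).
ollama pull llama3.2

# Ollama listens on 127.0.0.1:11434 by default. If RAGFlow runs in Docker,
# it has to reach Ollama on the host, so binding to all interfaces is one
# option (assumption -- check what your network setup actually needs).
OLLAMA_HOST=0.0.0.0 ollama serve

# RAGFlow's README lists a kernel setting required by its Elasticsearch
# container (assumption: still current for the release I'll be installing).
sudo sysctl -w vm.max_map_count=262144

# RAGFlow itself ships a Docker Compose setup (file names may differ
# between releases, so check the repo's docker/ directory).
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/docker
docker compose -f docker-compose.yml up -d
```

From what I've read, once the containers are up, the Ollama endpoint gets registered as a model provider in RAGFlow's web UI rather than in a config file, which fits the simpler-configuration pitch that drew my colleague to it in the first place.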