RagSolux
RagSolux is a desktop app that ingests CSVs and other structured files, builds a vector index, and lets you query the content through an LLM chat interface. It’s designed for developers and analysts who want repeatable, configurable retrieval-augmented Q&A over tabular data. The workflow keeps processing transparent with logs, progress controls, and tunable retrieval/generation settings.
Key features
- Multi-format ingestion — Import common tabular and document formats for indexing and chat-based querying.
- Embedding and indexing — Splits data into chunks/rows, generates embeddings, and stores them in a vector index.
- LLM and embedding selection — Configure which chat model and embedding model the app uses.
- Chat over your data — Ask natural-language questions and get answers grounded in retrieved segments.
- Retrieval tuning — Adjust diversity (MMR-style search) and other knobs to balance coverage vs. precision.
- Prompt controls — Use optional prompt overrides with validation to keep outputs consistent.
- Persistent conversation history — Keep prior context available across sessions.
- Desktop-focused UX — Dark theme, status panel, and progress/cancel controls for long processing runs.
- Processing visibility — Console-style logs show parsing, embedding, and indexing progress.
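The ingestion step above (splitting tabular data into chunks before embedding) can be sketched as follows. This is an illustrative example only: the function name, chunking strategy, and "header: value" serialization are assumptions, not RagSolux's actual code.

```python
# Hypothetical sketch of the ingest step: serializing CSV rows into text
# chunks ready for embedding. Names and chunk sizes are illustrative.
import csv
import io

def rows_to_chunks(csv_text: str, rows_per_chunk: int = 2) -> list[str]:
    """Serialize CSV rows as 'header: value' lines, grouped into chunks."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = [", ".join(f"{k}: {v}" for k, v in row.items()) for row in reader]
    # Group consecutive rows so each chunk carries some surrounding context.
    return [
        "\n".join(rows[i:i + rows_per_chunk])
        for i in range(0, len(rows), rows_per_chunk)
    ]

sample = "region,sales\nEU,120\nUS,340\nAPAC,95\n"
chunks = rows_to_chunks(sample)
# Each chunk holds up to two serialized rows, e.g.
# "region: EU, sales: 120\nregion: US, sales: 340"
```

Each chunk then gets an embedding and a slot in the vector index; keeping a few rows per chunk preserves local context without diluting retrieval precision.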
Supported technologies
Desktop platforms
- Windows
- macOS
- Linux
Supported file types
- CSV
- XLS / XLSX
- JSON
- HTML
- XML
RAG stack
- LangChain (ChatOpenAI interface)
- FAISS vector store
Retrieval and generation
- MMR-style retrieval with configurable parameters
- Fine-grained generation controls (e.g., verbosity, reasoning effort)
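To make the diversity knob concrete, here is a minimal sketch of MMR (maximal marginal relevance) reranking, where `lambda_mult` trades relevance against redundancy. This illustrates the general technique, not RagSolux's actual retriever implementation.

```python
# Minimal MMR reranking sketch over cosine similarities.
# lambda_mult = 1.0 is pure relevance; lower values favor diversity.
import math

def cos(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def mmr(query, docs, k: int, lambda_mult: float = 0.5) -> list[int]:
    """Return indices of k docs, balancing query relevance vs. redundancy."""
    relevance = [cos(query, d) for d in docs]
    selected: list[int] = []
    candidates = list(range(len(docs)))
    while candidates and len(selected) < k:
        def score(i):
            # Penalize similarity to anything already selected.
            redundancy = max((cos(docs[i], docs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance[i] - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

q = [1.0, 0.0]
docs = [[0.0, 1.0], [1.0, 0.1], [0.1, 1.0]]
mmr(q, docs, k=2, lambda_mult=1.0)  # pure relevance -> [1, 2]
mmr(q, docs, k=2, lambda_mult=0.3)  # diversity-weighted -> [1, 0]
```

The two calls at the end show the effect of the knob: pure relevance returns the two documents closest to the query, while a diversity-weighted setting swaps the second pick for a document that covers a different direction of the space.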
Use cases
- Ask questions about large CSV exports (sales, logs, telemetry) without writing custom queries.
- Explore spreadsheet-based datasets with conversational filtering and summaries.
- Build a quick “local knowledge base” from structured files for internal analysis.
- Compare answers under different retrieval settings to validate consistency.
- Share a repeatable QA workflow with teammates using the same configuration.
Notes
- Requires an API key for the configured LLM/embedding service; a custom base URL can be set if needed.
- Indexing progress and status are visible during ingestion, with cancel support for long jobs.
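A configuration along these lines might look like the sketch below. The environment-variable names (`OPENAI_API_KEY`, `OPENAI_BASE_URL`) follow the common convention for OpenAI-compatible clients but are assumptions here, not RagSolux's documented settings.

```python
# Hypothetical configuration sketch; variable and setting names are
# illustrative, not RagSolux's actual config schema.
import os

def build_llm_config() -> dict:
    cfg = {
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        # A custom base URL points the app at a self-hosted or other
        # OpenAI-compatible endpoint instead of the default.
        "base_url": os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }
    if not cfg["api_key"]:
        raise ValueError("An API key is required for the LLM/embedding service")
    return cfg
```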