Academic Radar

2026-05-13

8 papers
arXiv
Domain-Specific Data Generation Framework for RAG Adaptation

We define RAG adaptation as the process of refining individual components of the RAG pipeline—such as the retriever, embedding model, and LLM—to better match...

Original ↗
arXiv
RAG-HAR: Retrieval Augmented Generation-based Human Activity Recognition

We introduce RAG-HAR, a training-free retrieval-augmented framework that leverages large language models (LLMs) for HAR. RAG-HAR computes lightweight statistical descriptors, retrieves semantically similar...

Original ↗
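The descriptor-then-retrieve step the RAG-HAR abstract describes can be sketched roughly as follows. The descriptor set (mean/std/min/max), the Euclidean distance metric, and stopping at retrieval rather than prompting an LLM are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def descriptors(window):
    """Lightweight statistical descriptors for one sensor window
    (mean/std/min/max is an illustrative choice, not the paper's exact set)."""
    return np.array([window.mean(), window.std(), window.min(), window.max()])

def retrieve(query_desc, bank_descs, bank_labels, k=3):
    """Return labels of the k nearest stored windows by Euclidean distance."""
    dists = np.linalg.norm(bank_descs - query_desc, axis=1)
    return [bank_labels[i] for i in np.argsort(dists)[:k]]

# Toy labeled bank: low-variance "still" windows vs. high-variance "walking" ones.
rng = np.random.default_rng(0)
bank = [rng.normal(0, 0.1, 50) for _ in range(5)] + \
       [rng.normal(0, 2.0, 50) for _ in range(5)]
labels = ["still"] * 5 + ["walking"] * 5
bank_descs = np.stack([descriptors(w) for w in bank])

query = rng.normal(0, 2.0, 50)  # unseen high-variance window
neighbors = retrieve(descriptors(query), bank_descs, labels)
# In RAG-HAR the retrieved examples would then be placed into an LLM prompt;
# this sketch stops at the retrieval step.
print(neighbors)
```

In the full system the retrieved labeled examples serve as in-context evidence for the LLM rather than being used directly for classification.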
arXiv
Leveraging RAG for Training-Free Alignment of LLMs

Large language model (LLM) alignment algorithms typically consist of post-training over preference pairs. While such algorithms are widely used to enable...

Original ↗
arXiv
Beyond Rules: LLM‑Powered Linting for Quantum Programs

In this paper, we introduce LintQ-LLM+CoT and LintQ-LLM+RAG, novel approaches that redefine the detection of quantum programming problems by employing Large Language Models (LLMs) specialized, respectively, via Chain-of-Thought (CoT) prompting and a Retrieval-Augmented Generation...

Original ↗
arXiv
Stable-RAG: Mitigating Retrieval-Permutation-Induced Hallucinations in Retrieval-Augmented Generation

Retrieval-Augmented Generation (RAG) has become a key paradigm for reducing factual hallucinations in Large Language Models (LLMs), yet little is known about how the order of ...

Original ↗
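The permutation sensitivity that Stable-RAG targets can be illustrated with a toy experiment: ask the same question over every ordering of the retrieved passages and measure how often the answer changes. The order-sensitive `reader` below is a deliberately naive stand-in for an LLM, not the paper's model.

```python
from itertools import permutations
from collections import Counter

def reader(passages):
    """Stand-in for an LLM reader that is (by construction) order-sensitive:
    it trusts whichever passage it sees first. An assumption for illustration."""
    return passages[0].split(": ")[1]

passages = ["doc A: Paris", "doc B: Paris", "doc C: Lyon"]

# Query the reader under every permutation of the retrieved passages.
answers = [reader(list(order)) for order in permutations(passages)]
counts = Counter(answers)
stability = counts.most_common(1)[0][1] / len(answers)
print(counts, f"stability={stability:.2f}")
```

Each passage appears first in 2 of the 6 orderings, so this reader answers "Paris" 4 times and "Lyon" twice (stability ≈ 0.67); a permutation-robust system would score 1.0.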
arXiv
Q-RAG: Long Context Multi‑Step Retrieval via Value‑Based Embedder Training

Retrieval-Augmented Generation (RAG) methods enhance LLM performance by efficiently filtering relevant context for LLMs, reducing hallucinations and inference cost. However, most existing RAG methods fo...

Original ↗
arXiv
LatentRAG: Latent Reasoning and Retrieval for Efficient Agentic RAG

Unlike explicit agentic RAG methods that generate thoughts and subqueries in the language space, LatentRAG operates in the latent space and only produces latent tokens, i.e., the last-layer hidden states, for thoughts and subqueries (Sec. 4.1)...

Original ↗
arXiv
LLMSYS-HPOBench: Hyperparameter Optimization Benchmark Suite for Real-World LLM Systems

Large Language Model (LLM) systems have been the frontier of AI in many application domains, leading to new challenges and opportunities for hyperparameter optimization (HPO) for the AutoML...

Original ↗