AI Research Trends 

Shared LoRA Subspaces for Almost Strict Continual Learning

This paper presents Share, a new method for parameter-efficient continual fine-tuning that enables seamless adaptation across multiple tasks and modalities by sharing low-rank subspaces.

Language Models and Logic Programs for Trustworthy Tax Reasoning

This approach integrates LLMs with symbolic solvers for calculating tax obligations, showcasing significant improvements in accuracy and auditability.

DFlash: Block Diffusion for Flash Speculative Decoding

DFlash introduces a speculative decoding framework built on a lightweight block diffusion draft model, achieving significant decoding speedups.

Agent2Agent Threats in Safety-Critical LLM Assistants: A Human-Centric Taxonomy

This paper presents a taxonomy exploring vulnerabilities in LLM-based assistants, proposing a framework for rigorous threat modeling.

EvasionBench: A Large-Scale Benchmark for Detecting Managerial Evasion in Earnings Call Q&A

EvasionBench introduces a benchmark for detecting evasive responses in corporate earnings calls, providing valuable insights into managerial communication.

FinCoT: Grounding Chain-of-Thought in Expert Financial Reasoning

FinCoT enhances LLM performance in financial NLP tasks by incorporating domain-specific reasoning blueprints for structured prompting.

Dicta-LM 3.0: Advancing the Frontier of Hebrew Sovereign LLMs

This paper introduces Dicta-LM 3.0, an open-weight LLM trained on Hebrew, highlighting its capability to support NLP applications in low-resource languages.

E-Globe: Scalable ε-Global Verification of Neural Networks via Tight Upper Bounds and Pattern-Aware Branching

E-Globe presents a hybrid verifier that computes tight upper bounds on neural network outputs, improving the scalability and efficiency of verification.

MedErrBench: A Fine-Grained Multilingual Benchmark for Medical Error Detection and Correction

MedErrBench is introduced as the first multilingual benchmark for error detection and correction in clinical texts, offering a scalable framework for evaluating models.

From Code-Centric to Concept-Centric: Teaching NLP with LLM-Assisted “Vibe Coding”

This study presents the Vibe Coding approach, integrating LLMs to enhance students’ understanding of NLP beyond mere code generation.
