Improving Generative Ad Text on Facebook using Reinforcement Learning
Generative AI is driving economic change. This paper examines the impact of reinforcement learning (RL) on LLMs that generate advertising text on Facebook, highlighting its effectiveness in improving click-through rates and user satisfaction.
MetaCLIP 2: A Worldwide Scaling Recipe
This paper presents MetaCLIP 2, a training recipe for scaling contrastive language-image pretraining to global datasets, focusing on overcoming multilingual challenges to improve zero-shot performance.
UserBench: An Interactive Gym Environment for User-Centric Agents
UserBench introduces a benchmark to evaluate LLM-based agents in preference-driven interactions, revealing a significant disconnect between task completion and user satisfaction.
ChildGuard: A Specialized Dataset for Combatting Child-Targeted Hate Speech
ChildGuard is a dataset targeting hate speech against children, containing over 350k annotated examples, addressing current model limitations in detecting such speech.
DeepSieve: Information Sieving via LLM-as-a-Knowledge-Router
The paper presents DeepSieve, a framework leveraging LLMs to dynamically route queries, improving reasoning depth and precision over traditional retrieval approaches.
Validating Generative Agent-Based Models of Social Norm Enforcement
This study proposes a validation method for generative agent-based models that simulate social norm enforcement, enhancing our understanding of trust dynamics.
SAKE: Steering Activations for Knowledge Editing
SAKE proposes a method for knowledge editing in LLMs by steering activations, enabling efficient control and adaptation of what models memorize.
FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression
FLAT-LLM introduces a training-free structural compression method for LLMs that reduces computational load significantly while maintaining performance.
The Carbon Cost of Conversation: Sustainability in the Age of Language Models
This paper critiques the environmental impact of large language models, examining their carbon footprint and proposing sustainable practices in NLP.
From Semantics, Scene to Instance-awareness: Distilling Foundation Model for Open-vocabulary Situation Recognition
This research presents a method to enhance open-vocabulary situation recognition by distilling foundation models, improving generalization and robustness.