Generative Models for Language
Overview
Large language models generate fluent text but often violate hard constraints — format requirements, factual grounding, logical consistency, safety boundaries. My work investigates how to inject constraints directly into the generation process, rather than relying on post-hoc filtering, through techniques spanning decoding-time guidance, constrained fine-tuning, and reward-shaped post-training.
Current directions include enforcing structural constraints (syntax, format, schema) at decoding time without degrading fluency, balancing hard constraints against soft preferences, and understanding how these methods scale.
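As a sketch of the decoding-time idea: a structural constraint can be compiled into an automaton, and at each generation step the logits of tokens the automaton forbids are masked to negative infinity, so sampling can only produce valid continuations. Everything below is illustrative, not any particular system: the toy vocabulary, the hypothetical `FSM` (a hand-written automaton for one JSON-like shape), and `fake_logits` (a stand-in for a real model's forward pass).

```python
import math
import random

# Toy vocabulary; a real system would use the model's tokenizer.
VOCAB = ["{", '"key"', '"name"', ":", '"value"', '"text"', "}", "<eos>"]

# Hypothetical finite-state machine encoding the hard format constraint
# { <string> : <string> } <eos>  -- maps state -> {allowed token: next state}.
FSM = {
    0: {"{": 1},
    1: {'"key"': 2, '"name"': 2},
    2: {":": 3},
    3: {'"value"': 4, '"text"': 4},
    4: {"}": 5},
    5: {"<eos>": 6},
}

def fake_logits(step):
    # Stand-in for a language-model forward pass: arbitrary scores.
    random.seed(step)
    return [random.uniform(0.0, 1.0) for _ in VOCAB]

def constrained_decode(max_steps=10):
    state, out = 0, []
    for step in range(max_steps):
        logits = fake_logits(step)
        # Mask tokens the automaton forbids in the current state.
        masked = [lg if tok in FSM.get(state, {}) else -math.inf
                  for tok, lg in zip(VOCAB, logits)]
        # Greedy pick among the surviving (constraint-satisfying) tokens.
        best = max(range(len(VOCAB)), key=lambda i: masked[i])
        tok = VOCAB[best]
        out.append(tok)
        state = FSM[state][tok]
        if tok == "<eos>":
            break
    return out

print(" ".join(constrained_decode()))
```

Because the mask is applied before token selection, fluency among the remaining options is preserved: the model still ranks all constraint-satisfying tokens by its own scores, and only structurally invalid continuations are removed.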