Anushka Sivakumar
Hope to put out more amazing research! Open to collaborative projects!

EMNLP 2025

SteerVLM: Robust Model Control Through Lightweight Activation Steering for Vision Language Models 🔗
Anushka Sivakumar, Andrew Zhang, Zaber Hakim, Chris Thomas

This work introduces SteerVLM, a lightweight steering module that guides Vision-Language Models (VLMs) toward outputs that better adhere to desired instructions. Our approach learns from the latent embeddings of paired prompts encoding target and converse behaviors to dynamically adjust the activations connecting the language modality with image context. This enables fine-grained, inference-time control over complex output semantics without modifying model weights, while preserving performance on off-target tasks. The steering module requires learning parameters equal to only 0.14% of the original VLM's size, and it achieves model control through dimension-wise activation modulation and adaptive steering across layers, without requiring pre-extracted static vectors or manual tuning of intervention points. Furthermore, we introduce VNIA (Visual Narrative Intent Alignment), a multimodal dataset created to facilitate the development and evaluation of VLM steering techniques. Our method outperforms existing intervention techniques on VLM steering and hallucination-mitigation benchmarks and offers a robust solution for multimodal model control through activation engineering.
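To make the idea of dimension-wise activation modulation concrete, here is a minimal PyTorch sketch of inference-time activation steering in general, not SteerVLM's actual module: a small learned component gates a steering offset per hidden dimension and adds it to a frozen layer's activations. All names (SteeringGate, hidden_dim, the gating design) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of dimension-wise activation steering.
# Not the SteerVLM implementation; names and design are assumptions.
import torch
import torch.nn as nn

class SteeringGate(nn.Module):
    """Learns a per-dimension gate and a steering offset applied to a
    frozen layer's hidden activations at inference time."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Gate decides, per feature, how strongly to steer.
        self.gate = nn.Linear(hidden_dim, hidden_dim)
        # Learned steering direction added to the activations.
        self.direction = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Dimension-wise modulation: sigmoid gate in [0, 1] scales how
        # much of the steering direction each feature receives.
        g = torch.sigmoid(self.gate(h))
        return h + g * self.direction

# Usage on dummy activations of shape (batch, tokens, dim).
hidden_dim = 64
steer = SteeringGate(hidden_dim)
h = torch.randn(1, 16, hidden_dim)
steered = steer(h)
```

Because only the gate and offset would be trained in a setup like this, the base model's weights stay untouched, which is what makes this family of interventions cheap and toggleable at inference time.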

Flexible-length Text Infilling for Discrete Diffusion Models 🔗
Andrew Zhang, Anushka Sivakumar, Chia-wei Tang, Chris Thomas

Discrete diffusion models are a new class of text generators that offer advantages such as bidirectional context use, parallelizable generation, and flexible prompting compared to autoregressive models. However, a critical limitation of discrete diffusion models is their inability to perform flexible-length or flexible-position text infilling without access to ground-truth positional data. We introduce DDOT (Discrete Diffusion with Optimal Transport Position Coupling), the first discrete diffusion model to overcome this challenge. DDOT jointly denoises token values and token positions, employing a novel sample-level Optimal Transport (OT) coupling. This coupling preserves relative token ordering while dynamically adjusting the positions and length of infilled segments, a capability previously missing in text diffusion. Our method is orthogonal to existing discrete text diffusion methods and is compatible with various pretrained text denoisers. Extensive experiments on text infilling benchmarks such as One-Billion-Word and Yelp demonstrate that DDOT outperforms naive diffusion baselines. Furthermore, DDOT achieves performance on par with state-of-the-art non-autoregressive models and enables significant improvements in training efficiency and flexibility.
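As a rough illustration of position coupling, the sketch below shows sample-level optimal transport matching between two sets of 1-D token positions. Under a squared-distance cost in one dimension, the optimal plan is the monotone (sorted) matching, which preserves relative token order. This is a generic OT illustration under those assumptions, not DDOT's training code; the function name and example values are hypothetical.

```python
# Hypothetical sketch of 1-D OT coupling for token positions.
# Illustrative only; not the DDOT implementation.
import numpy as np

def ot_position_coupling(source_pos: np.ndarray, target_pos: np.ndarray) -> np.ndarray:
    """Couple source and target positions with 1-D optimal transport.
    For squared-distance cost in one dimension, the OT plan is the
    monotone (sorted) matching, so relative token order is preserved."""
    src_order = np.argsort(source_pos)
    tgt_order = np.argsort(target_pos)
    pairing = np.empty_like(src_order)
    # Match the i-th smallest source position to the i-th smallest target.
    pairing[src_order] = tgt_order
    return pairing

# Example: noisy positions coupled to clean target positions.
source = np.array([0.9, 0.1, 0.5])
target = np.array([2.0, 0.0, 1.0])
idx = ot_position_coupling(source, target)
print(idx)  # [0 1 2]: largest source maps to largest target, etc.
```

Matching by rank rather than raw distance is what keeps the coupling order-preserving while still letting matched positions, and hence infill lengths, shift freely.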


ACL 2025

Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval 🔗
Hani Alomari, Anushka Sivakumar, Andrew Zhang, Chris Thomas


SPRINGER NATURE Journal (Evolving Systems) 2025

Microbe-drug association prediction using metapath aggregated node embeddings and self-supervised learning 🔗
K Syama, Anushka Sivakumar, Angel Arul Jothi

NeurIPS 2024

JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images 🔗
Zhecan Wang, J Liu, Chia-wei Tang, Hani Alomari, Anushka Sivakumar, Chris Thomas, et al.


SPRINGER NATURE Journal (IJIT) 2023

Shakey: an improved cipher for protection of IoT devices 🔗
Anushka Sivakumar, Asmi Sriwastawa, Raja Muthulagu


SPRINGER NATURE Journal (Evolving Systems) 2023

Microbial Biomarkers Identification for Human Gut Disease Prediction using Microbial Interaction Network Embedded Deep Learning 🔗
Anushka Sivakumar, K Syama, Angel Arul Jothi