Bio
I am a postdoctoral researcher in the Department of Electrical and Computer Engineering at Princeton University, where I collaborate closely with Prof. H. Vincent Poor and Prof. Sanjeev Kulkarni, as well as with Prof. Vahid Tarokh of Duke University and Prof. Taposh Banerjee of the University of Pittsburgh. I completed my Ph.D. in Electrical and Systems Engineering at the University of Pennsylvania under the guidance of Prof. Hamed Hassani, collaborating with Prof. George J. Pappas, Prof. Aritra Mitra of North Carolina State University, and Prof. Aryan Mokhtari of the University of Texas at Austin.
My research vision is to develop scalable and resilient algorithms that enable reliable decision-making in complex, multi-agent, and distributed environments. I draw upon techniques from optimization, machine learning, and statistics to tackle challenges at the intersection of theoretical foundations and practical deployment. My work addresses several key areas:
Federated and Asynchronous Reinforcement Learning: I study the non-asymptotic behavior of reinforcement learning under asynchronous and distributed conditions. My contributions include methods that preserve reliable decision-making despite stale updates and adversarial disruptions, a capability crucial for systems operating under communication constraints.
Distributed Decision-Making and Robust Optimization: My work on multi-agent systems includes developing algorithms for decision-making under uncertainty, focusing on resilience against adversarial influences and scalability in large networks.
Change Detection for Unnormalized Distributions: I develop change detection methods for energy-based models and other high-dimensional, unnormalized distributions, enabling efficient adaptation to sudden distributional shifts, a critical capability for systems operating under uncertainty.
Minimax Optimization: I explore minimax optimization in several settings: continuous-discrete formulations for robust decision-making and recommendation systems, delayed-gradient variants for scalability in distributed systems, and optimization under adversarial corruption for resilience in uncertain environments. This research provides fundamental tools for achieving reliable performance in dynamic conditions characterized by delays, adversarial disruptions, and mixed data structures.
Submodular Optimization and Meta-Learning: I investigate scalable frameworks for submodular maximization and adaptive meta-learning for discrete tasks, focusing on applications that demand large-scale processing and resilience to data uncertainty. My work in this area has applications in domains ranging from recommendation systems to dynamic resource allocation.
My work has been published in leading venues, including the Conference on Neural Information Processing Systems (NeurIPS), the International Conference on Artificial Intelligence and Statistics (AISTATS), the American Control Conference (ACC), the IEEE Conference on Decision and Control (CDC), and IEEE Transactions on Signal Processing. By advancing robust methodologies for autonomous and distributed systems, I aim to contribute to intelligent systems capable of thriving in dynamic, real-world environments.