Robotics
We focus on creating algorithms that enable robots to see, understand, and safely act in real-world settings. Our directions include vision–language-based robot learning, world models for predicting future outcomes, planning in changing environments, and safety-aware control via control barrier functions (CBFs). Through these and related efforts, we aim to build robots that reliably handle diverse, open-ended tasks alongside people.
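The safety-aware CBF control mentioned above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the method of any publication listed here: for a 1-D single integrator with safe set {x ≥ 0} and barrier h(x) = x, the CBF quadratic program collapses to a closed-form clamp on the nominal input.

```python
def cbf_safety_filter(x, u_nom, alpha=1.0, u_max=2.0):
    """Closed-form CBF safety filter for the toy system x' = u.
    Safe set {x >= 0}, barrier h(x) = x. The CBF condition
    dh/dt >= -alpha * h(x) reduces to u >= -alpha * x, so the usual
    CBF quadratic program becomes a simple clamp of u_nom."""
    u = max(u_nom, -alpha * x)         # enforce the CBF constraint
    return max(-u_max, min(u_max, u))  # respect actuation limits

# A nominal controller drives the state toward the unsafe target x = -1;
# the filter minimally modifies it so the state never leaves {x >= 0}.
x, dt = 0.5, 0.01
for _ in range(1000):
    u_nom = -2.0 * (x + 1.0)           # nominal: regulate to x = -1
    x += dt * cbf_safety_filter(x, u_nom)
```

Under the filter, the state decays toward the safe-set boundary at x = 0 instead of crossing into the unsafe region; in higher dimensions the same constraint is enforced by solving a small quadratic program at each step.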
Selected Publications
- MapleGrasp: Mask-guided Feature Pooling for Language-driven Efficient Robotic Grasping
Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 2026 (Accepted)
- RoboPEPP: Vision-Based Robot Pose and Joint Angle Estimation through Embedding Predictive Pre-Training
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2025
- OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation
Advances in Neural Information Processing Systems, 2025
- MultiTalk: Introspective and Extrospective Dialogue for Human-Environment-LLM Alignment
Proceedings of the IEEE International Conference on Robotics and Automation, 2025
- Collision Avoidance for Convex Primitives via Differentiable Optimization-Based High-Order Control Barrier Functions
IEEE Transactions on Control Systems Technology, 2025
Systems & Control
Our research develops advanced control methods for a broad spectrum of systems, spanning multi-agent networks, nonlinear and resilient control, and decentralized large-scale architectures. By bridging rigorous theoretical analysis with practical applications, including safe and learning-based control, we aim to solve fundamental challenges in coordinating next-generation dynamic systems.
Selected Publications
- Control of max-plus linear systems using feedback cycle shaping
Automatica, 2025
- A Matrix Pencil Formulation for Nonconservative Realization of Scaling-Based Controllers for Feedforward-Like Systems
IEEE Transactions on Automatic Control, 2024
- State constrained stochastic optimal control for continuous and hybrid dynamical systems using DFBSDE
Automatica, 2023
- Learning a Better Control Barrier Function
Proceedings of the IEEE Conference on Decision and Control, 2022
- A dynamic high-gain design for prescribed-time regulation of nonlinear systems
Automatica, 2020
Secure Cyber-Physical Systems
Our research advances security for Cyber-Physical Systems (CPS) through resilient defensive methods and intelligent monitoring. Work in this area includes real-time anomaly detection for networks and embedded controllers, threat detection for CPS components, and methods to strengthen critical infrastructure against cyber-physical attacks. We leverage static/dynamic program analysis, system telemetry (e.g., hardware performance counters), and ML-driven observability to detect, localize, and respond to security threats in complex CPS environments.
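As a minimal, self-contained sketch of the telemetry-based monitoring idea (not the method of any publication listed here), one can flag anomalous hardware-performance-counter readings by their deviation from a benign baseline; the counter values below are synthetic.

```python
import statistics

class TelemetryAnomalyDetector:
    """Toy z-score detector over a baseline window of counter readings.
    Real CPS monitors use richer features and learned models; this only
    illustrates the thresholding principle."""

    def __init__(self, baseline, threshold=3.0):
        self.mean = statistics.fmean(baseline)
        self.std = statistics.stdev(baseline)
        self.threshold = threshold

    def is_anomalous(self, reading):
        # Flag readings more than `threshold` standard deviations from baseline.
        return abs(reading - self.mean) / self.std > self.threshold

# Baseline: instructions-retired counts from benign runs (synthetic numbers).
baseline = [1000, 1020, 990, 1010, 1005, 995, 1015, 985]
det = TelemetryAnomalyDetector(baseline)
```

A reading near the baseline (e.g., 1008) passes, while a large excursion (e.g., 2500, as a compromised controller might produce) is flagged for further localization and response.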
Selected Publications
- Tamper-Proof Network Traffic Measurements on a NIC for Intrusion Detection
IEEE Transactions on Network and Service Management, 2025
- Combining switching mechanism with re-initialization and anomaly detection for resiliency of cyber–physical systems
Automatica, 2025
- EnIGMA: Interactive Tools Substantially Assist LM Agents in Finding Security Vulnerabilities
Proceedings of Machine Learning Research, 2025
- REMaQE: Reverse Engineering Math Equations from Executables
ACM Transactions on Cyber-Physical Systems, 2024
- NYU CTF Bench: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security
Advances in Neural Information Processing Systems, 2024
Trustworthy & Resilient AI
This area is dedicated to providing formal, provable guarantees for the stability, safety, and resilience of complex AI and machine learning systems. We establish theoretical limits and develop practical methods for certified robustness in deep neural networks (DNNs), including defending against backdoor attacks and providing provably robust perceptual similarity metrics. A key focus is applying formal verification techniques, such as randomized smoothing combined with diffusion models, to complex systems like segmentation models. Furthermore, we develop switching and re-initialization strategies to enhance the resilience of CPS against attacks, guaranteeing the mean-square boundedness of system states through formal analysis.
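The core randomized-smoothing idea can be sketched in a few lines. This is a simplified toy (a hypothetical 1-D base classifier, and a radius computed from the empirical vote fraction rather than a statistically valid confidence lower bound as the rigorous analyses require), for illustration only.

```python
import random
from statistics import NormalDist

def base_classifier(x):
    """Toy 1-D base classifier assumed for illustration: class 1 iff x > 0."""
    return 1 if x > 0 else 0

def smoothed_predict(x, sigma=0.25, n=2000, rng=None):
    """Randomized smoothing: majority vote of the base classifier under
    Gaussian input noise. Certified L2 radius = sigma * Phi^{-1}(p_top),
    using the empirical top-class fraction p_top (illustrative, not a
    statistically sound bound)."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    votes = sum(base_classifier(x + rng.gauss(0.0, sigma)) for _ in range(n))
    p_top = max(votes, n - votes) / n
    label = 1 if votes > n - votes else 0
    radius = sigma * NormalDist().inv_cdf(min(p_top, 1 - 1e-9))
    return label, radius

label, radius = smoothed_predict(1.0)
```

Inputs far from the decision boundary receive a stable vote and a large certified radius; the same recipe extends to segmentation by smoothing each pixel's prediction.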
Selected Publications
- Detecting All-to-One Backdoor Attacks in Black-Box DNNs via Differential Robustness to Noise
IEEE Access, 2025
- LipSim: A Provably Robust Perceptual Similarity Metric
12th International Conference on Learning Representations, ICLR, 2024
- Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations
12th International Conference on Learning Representations, ICLR, 2024
- Towards Better Certified Segmentation via Diffusion Models
Proceedings of Machine Learning Research, 2023
- Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection
IEEE Transactions on Information Forensics and Security, 2023