Leadership
Zico Kolter
Robustness and AI security, VLM training, few-/zero-shot classifier evaluation, equilibrium models
Graham Neubig
Evaluation/verification of LLM outputs, LM training methods, retrieval-based LLMs, LLMs with external tools
Aditi Raghunathan
Algorithms for finetuning foundation models, robustness/reliability, understanding representations
Chenyan Xiong
LLM pretraining, retrieval-augmentation, data-centric LLMs, embedding models
Members of the FLAME Center
Language for robots, LLM-based planning, other signals
Hardware algorithms, long-context generation, FMs for new materials
System support for ML, LLMs on local devices, scaling open language models
Multiagent systems, ethics, societal implications
Foundation models for music
Pragmatics of language, context, generating/following instructions, code models, interfaces, multimodal dialog
Finetuning LLMs for neuro-symbolic reasoning, autonomous scientific research, robotics
State space models, network architectures, audio models
Generative language models for coherent and engaging narratives, leveraging models as creative tools
System support for LLMs, computational efficiency, speculative execution, GPU memory
LLMs, efficient fine-tuning, inference
NLP in medicine, decision support, evaluation outside labeled datasets, robustness
LLM understanding, neuro-symbolic architectures, reliability
Algorithms for finetuning foundation models, robustness/reliability, understanding representations
Vision language models, long-tailed recognition, image generation
Multimodal models, visual language reasoning
Ethical risks, LLM limitations w.r.t. reasoning, social intelligence
Privacy, robustness, finetuning, LLM validation, federated learning
Efficiency of LLMs, computation, data, applications outside CS
Multimodal learning, human-AI collaboration, LLM evaluation
Foundation models for speech
LLMs for math and theorem proving, inference algorithms
Intersection of HCI and NLP, mapping models to use cases
Responsible AI, interactive learning, economic aspects of machine learning
System support for LLMs, pretraining, finetuning, federated learning, private inference, acceleration, Vicuna model, AI for science