Shaping the future of responsible AI for national security and scientific discovery

Bhavya Kailkhura

At the forefront of LLNL’s strategy for artificial intelligence development and deployment is staff scientist Bhavya Kailkhura. As a council member of LLNL’s Data Science Institute and a key member of the Center for Applied Scientific Computing (CASC) Division, Kailkhura is forging the path forward for safe and trustworthy AI, spearheading efforts centered on building confidence in AI for national security applications. 

Path to Livermore 

Growing up in a remote Himalayan region of India, Kailkhura developed an early passion for problem-solving and innovation, ignited by challenges he witnessed firsthand, such as limited access to clean water and reliable energy. Motivated to find solutions, he devised approaches to water storage and purification, as well as sustainable energy, using locally available resources. His work eventually earned him the honor of meeting India’s then president and renowned scientist, Dr. A.P.J. Abdul Kalam. That pivotal encounter reinforced Kailkhura’s belief in the transformative potential of science and technology, propelling him to pursue a bachelor’s degree in electrical engineering.

After finishing his undergraduate studies at age 19, Kailkhura moved to the United States, where he earned his Ph.D. in electrical and computer engineering from Syracuse University. His graduate research focused on methods for learning from data and making reliable decisions even when parts of that data are misleading, corrupted, or intentionally deceptive. Driven by a desire to apply his work to real-world problems, Kailkhura joined LLNL as an intern in 2015. Drawn to the Laboratory’s mission-driven culture, interdisciplinary environment, and cutting-edge computing capabilities, he returned as a postdoctoral researcher in 2016 and transitioned to a staff scientist role in 2018, where he continues to advance safe and trustworthy AI techniques for complex scientific and national security challenges.

Mission-Driven Research and Collaboration

Kailkhura’s multidisciplinary work at Livermore centers on advancing frontier AI research that supports LLNL’s mission of tackling high-stakes problems at the intersection of science, technology, and national security. Leading the “Safe and Responsible AI” subject area in the Data Science Institute, he works with cross-functional teams to promote AI deployment with safety and security in mind. By partnering with experts in academia and industry, Kailkhura and his team are not only advancing AI but also developing safety guardrails to help ensure AI technologies are beneficial, reliable, and aligned with LLNL’s mission needs. “The more capable AI becomes, the more important it is to ask what new safety and security risks it might introduce,” says Kailkhura. “AI failures could behave in unfamiliar ways, and if we don’t focus on solving these challenges now, we may be unprepared to manage their consequences.”

According to Kailkhura, the risks associated with AI are both intentional and unintentional. On one hand, adversaries may deliberately exploit AI systems, for example through cyber-attacks or data poisoning, posing clear security threats. On the other hand, users may inadvertently introduce risk by misunderstanding how AI systems function or where they are likely to fail. These vulnerabilities sit at the center of his research, where he works closely with domain experts applying AI in programmatic areas such as materials science, energetics, and biology. Through these collaborations, his team is building foundational methods for evaluating, stress-testing, and aligning AI models with scientific and national security objectives.

Vision for the Future 

Kailkhura’s journey has shaped a vision grounded in purpose and impact. “Redefining how AI and science converge will unlock the next era of innovation and discovery,” Kailkhura says. By integrating advanced AI seamlessly into scientific workflows, he envisions a future where research barriers are efficiently overcome, enabling transformative discoveries across diverse fields. By pushing the frontier of AI research with safety and trust embedded at its core, Kailkhura aims to accelerate scientific progress, foster interdisciplinary collaboration, and ensure AI delivers meaningful and lasting benefits to society.
