
Large Language Models (LLMs) are increasingly used in scientific domains, but their reasoning often lacks the structure and rigor required for complex problem-solving. This talk explores strategies for enhancing scientific reasoning in LLMs through structured prompting, iterative dialogue, and workflow integration. By emphasizing clarity, hypothesis generation, and critical evaluation, we can guide models to produce more consistent and interpretable outputs. We examine how methods inspired by human reasoning, such as defining concepts, questioning assumptions, and exploring alternatives, can improve LLM performance across scientific tasks. These approaches promote deeper engagement with data and theory, moving beyond surface-level responses. Ultimately, we aim to highlight pathways for aligning LLM behavior with the norms of scientific inquiry.
Bio: Hassan Harb is a postdoctoral researcher in the Materials Science Division at Argonne National Laboratory. He earned his PhD in Quantum Chemistry from UC Merced, where he studied the electronic structure of lanthanide-containing molecules and developed methods to model electron detachment processes. His current research leverages quantum chemistry and artificial intelligence for the computational discovery of molecules for green chemistry and energy storage, integrating electronic structure methods with machine learning to accelerate the discovery of new molecules and materials.
See all upcoming talks at https://www.anl.gov/mcs/lans-seminars