When most people hear the term artificial intelligence, they picture chatbots, self-driving cars, or image generators. But when I spoke with Professor Pat Langley, one of the field’s senior researchers, it became clear that AI’s greatest potential might not be in automating our world—it might be in understanding it.
Langley doesn’t see AI as a tool for faster calculations or better answers to questions. He sees it as a scientific partner, one capable of reasoning about nature the way humans do. Our conversation wandered through decades of research, from the philosophy of science to early programs, but at its core was one fascinating idea: Can machines actually discover scientific laws?
He traces his inspiration back to the philosopher Karl Popper, who focused on how scientific theories are validated and claimed that a logic of discovery was impossible.
That claim struck him in the early 1970s while he was studying the philosophy of science. “This can’t be right,” he thought. “Why can’t we build programs that create scientific theories and laws?” It was a radical notion for its time, but he went on to write his dissertation on the topic.
He built systems that could search for patterns in observations and propose laws that characterized them. Drawing inspiration from Herbert Simon and Allen Newell’s early heuristic programs at Carnegie Mellon—systems that learned to play chess or solve puzzles—Langley applied those same methods to uncover the laws of nature.
His teams built programs capable of rediscovering known scientific laws—relationships involving specific heat, density, or motion—from data. These systems didn’t just fit curves; they searched a space of candidate laws, generating new equation structures, introducing new variables, and applying their methods recursively to uncover hidden structure.
One system, called BACON, became a landmark in this area. It could detect regularities in data that hinted at deeper scientific relationships. Other systems went even further, finding not just numerical equations but also process models, which describe how component processes interact dynamically in systems like ecosystems or chemical reactions.
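To give a feel for this kind of search, here is a toy sketch in Python. It is not Langley’s code, and it is far simpler than BACON, which introduced new theoretical terms incrementally rather than enumerating exponents; the helper names and the brute-force loop are my own illustration. Given paired observations of two variables, it hunts for a product or ratio that stays roughly constant—the same kind of regularity BACON exploited to rediscover laws like Boyle’s and Kepler’s third.

```python
# Toy sketch of a BACON-style discovery heuristic (illustrative only).
# Given observations of two variables, look for an invariant of the form
# x^a * y^b that is roughly constant across the data.

def nearly_constant(values, tol=0.05):
    """True if all values lie within tol (relative) of their mean."""
    mean = sum(values) / len(values)
    return all(abs(v - mean) <= tol * abs(mean) for v in values)

def find_invariant(x, y, max_exp=4):
    """Search small integer exponents (a, b) such that x^a * y^b is constant.

    Returns the exponent pair of the discovered law, or None.
    BACON itself built such terms step by step, noticing when two
    quantities increased together or in opposition; this sketch simply
    enumerates candidates to show the flavor of the search.
    """
    for a in range(1, max_exp + 1):
        for b in range(-max_exp, max_exp + 1):
            if b == 0:
                continue
            term = [xi ** a * yi ** b for xi, yi in zip(x, y)]
            if nearly_constant(term):
                return a, b
    return None

# Idealized Boyle's-law data: pressure * volume is constant.
pressure = [1.0, 2.0, 4.0, 8.0]
volume   = [8.0, 4.0, 2.0, 1.0]
print(find_invariant(pressure, volume))   # -> (1, 1), i.e. P * V = const

# Rough Kepler data (AU, years): distance^3 / period^2 is constant.
distance = [0.39, 0.72, 1.0, 1.52]
period   = [0.24, 0.62, 1.0, 1.88]
print(find_invariant(distance, period))   # -> (3, -2), i.e. D^3 / P^2 = const
```

Even this crude version captures the key move: instead of merely fitting a curve, the program proposes a new theoretical term (a product or ratio) and keeps the ones that behave like constants of nature.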
Langley’s research, and the work it has inspired, has shown that, contrary to Popper’s view, a logic of discovery is possible, and research on the topic is now more active than ever.
Langley often contrasts his approach with modern neural networks. “Neural nets are powerful,” he said with a smile, “but they’re not scientists.” His point wasn’t criticism—it was precision. Neural networks recognize patterns, but they don’t know why those patterns even exist. They can tell you what fits, but they can’t tell you what it actually means.
That’s why Langley continues to advocate for symbolic AI and heuristic methods—approaches that combine data with structure, logic, and explanation. To him, intelligence lies not in prediction but in reasoning and justification.
He uses the phrase explainable agency to describe this vision. It’s the idea that an intelligent system shouldn’t just carry out actions but should also be able to explain its choices afterward.
Listening to Langley talk, I realized how human his version of AI actually is. His systems don’t just crunch numbers; they mimic how scientists think: noticing regularities, forming hypotheses, refining models, and searching for better explanations.
He spoke about how these early systems could even “rediscover” physical laws from raw data—like a machine scientist retracing the steps of Newton or Boyle. Later versions moved beyond equations to process models, capable of understanding how multiple reactions or forces interact over time. This way of thinking feels less like programming and more like mentorship—teaching machines not what to know, but how to know.



