The role of agency in causation is empirically obvious. When I move my hands, grab the coffee mug, and drink the coffee, the cause of these events is me, the agent. No one in their right mind would argue that the water molecules mixed with the coffee conspired to enter my mouth. You cannot explain these events with the laws of physics alone. It was my mind directing my muscles to grab the mug and drink. The laws of physics are involved throughout the process, but the real cause is obviously the agent: my mind. A persistent reductionist can keep arguing, of course. You can say that the mind is the result of brain function, that brain function consists of connected neurons, that neurons are made of atoms and molecules, and that therefore the ultimate cause is the combined action of all the components of the brain. I am not impressed by this reductionist argument.
My favorite science journalist, Natalie Wolchover, recently wrote about Erik Hoel’s “causal emergence” approach in this article. Hoel has developed mathematical arguments to defend the view that the agent-level description of causality is scientifically valid.
“If you just say something like, ‘Oh, my atoms made me do it’ — well, that might not be true. And it might be provably not true.” – Erik Hoel
The key word is “provably.” I think the work on causal emergence is only beginning. He and other young scientists like him have a long way to go.
For more details you can read Hoel’s essay “Agent Above, Atom Below”. His mathematical paper, “When the Map Is Better Than the Territory,” was published in Entropy. Here’s the abstract of that paper:
“The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions may be useful to observers, they are at best a compressed description and at worse leave out critical information and causal relationships. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus and be more informative at a macroscale. That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.” While causal emergence may at first seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon’s discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees. For some systems, only macroscale descriptions use the full causal capacity. These macroscales can either be coarse-grains, or may leave variables and states out of the model (exogenous, or “black boxed”) in various ways, which can improve the efficacy and informativeness via the same mathematical principles of how error-correcting codes take advantage of an information channel’s capacity. The causal capacity of a system can approach the channel capacity as more and different kinds of macroscales are considered. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.” – Erik Hoel (Entropy)
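To give a concrete feel for what “more informative at a macroscale” means, here is a minimal sketch of Hoel’s effective information (EI) measure, assuming the standard definition: the average KL divergence, in bits, of each row of a transition probability matrix from the mean row, which corresponds to intervening on the system with a uniform distribution over states. The toy four-state system and its two-state coarse-graining below are my own illustrative choices, modeled on the kind of example used in the causal emergence literature, not taken from Hoel’s paper.

```python
import numpy as np

def effective_information(tpm):
    """Effective information (EI) of a transition probability matrix:
    the average KL divergence of each row from the mean row, in bits.
    The mean row is the effect distribution under uniform interventions."""
    tpm = np.asarray(tpm, dtype=float)
    n = tpm.shape[0]
    avg = tpm.mean(axis=0)
    ei = 0.0
    for row in tpm:
        mask = row > 0  # skip zero entries (0 * log 0 = 0 by convention)
        ei += np.sum(row[mask] * np.log2(row[mask] / avg[mask])) / n
    return ei

# Micro level: states 0-2 transition uniformly (noisily) among {0, 1, 2};
# state 3 maps to itself deterministically.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro level: coarse-grain {0, 1, 2} -> A and {3} -> B.
# The noise averages out and the macro dynamics become deterministic.
macro = [[1, 0],
         [0, 1]]

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: the map beats the territory
```

The macro description carries more effective information than the micro description (1.0 bit versus about 0.81 bits), which is exactly the “causal emergence” the abstract describes: coarse-graining discards noisy micro detail while making the cause–effect structure sharper.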
In these discussions one can invoke quantum mechanics. Erik Hoel did not, and I do not intend to mention it in this context either; it would only confuse the reader further. My only purpose here is to bring Hoel’s arguments to your attention. Did you think I had anything original to add? 🙂