This blog is my attempt to reflect on and capture thoughts about how and why our technology seems so out of sync with what is actually good for our well-being, and what we need to do to realign the incentives that drive it.  I'm very interested in how attention, cognition, and behavior underlie our sense of happiness and fulfillment, and in the mediating role of technology in that story.

So that explains 'tech' and 'cognition', but why 'epistemology'?  I've found that an honest review of the psychology literature doesn't take long to run into serious difficulties in discerning truth.  In many ways we live in a post-truth society, and the perverse incentives of the academic world have sadly made that no less true in the softer branches of science.

There is an ongoing replication crisis in social psychology, and the way we interact is changing at such a rapid pace that traditional research methods are being left in the dust.  A famous Nature paper showed that the best way to tell whether a study in a major psychology journal will replicate is to ask a prediction market; in other words, our common sense is currently better at revealing psychological truths than the scientific process and peer-reviewed publication.  At the core of the replication problem is a profound crisis of statistical literacy and statistical technique.
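To make the "statistical technique" part concrete, here is a minimal simulation (my own sketch in Python, with made-up parameters, not drawn from that paper) of one practice behind the crisis: peeking at the data and stopping as soon as p < 0.05.  Even when there is no real effect at all, optional stopping inflates the false-positive rate well beyond the advertised 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_study(max_n=100, peek_every=10, alpha=0.05):
    """One simulated study with NO true effect: both groups are drawn
    from the same normal distribution.  The 'researcher' runs a t-test
    after every `peek_every` subjects per group and stops as soon as
    p < alpha."""
    a, b = [], []
    for _ in range(max_n // peek_every):
        a.extend(rng.normal(0.0, 1.0, peek_every))
        b.extend(rng.normal(0.0, 1.0, peek_every))
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True   # a false positive, published as a "finding"
    return False

n_studies = 5000
rate = sum(peeking_study() for _ in range(n_studies)) / n_studies
print(f"False-positive rate with optional stopping: {rate:.1%}")
# Each individual test promises a 5% error rate, but repeated peeking
# pushes the realized rate several times higher.
```

None of the individual tests here is computed incorrectly; it's the procedure wrapped around them that quietly breaks the error guarantee, and standard peer review rarely sees the procedure.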

To say anything true about our cognition and psychology requires a deep dive into the foundations of statistics and a healthy skepticism of modern publications.  Who can we trust?  How do we evaluate them?  What are the worthwhile theories?  How can we identify quality work and reliable data without recreating the experiments ourselves?  How much should we trust our intuition, and what builds our intuition in the first place?

It goes one level further.  An epistemological shift is required not only to re-evaluate the scientific literature, but also to model how we think and behave.  The top computational cognitive scientists, in designing the systems that replicate human cognition most accurately, are championing stronger causal inductive biases than the typical statistical techniques of deep learning provide.

Epistemology is explicit in cognitive science: one of the field's core pursuits is to model the way we form beliefs and acquire knowledge.  Researchers in this area naturally advocate for the same principles at higher levels of abstraction as well.

A compelling line of argument suggests that our statistical techniques should reflect our innate epistemological processes.  We have some innate ability to discern truth and model the world, and that innate kernel can be our only arbiter of truth; insofar as modern statistical practices don't reflect it, they are failing.

To put it another way: our minds seem to work in a Bayesian way, so we should build Bayesian models of how humans reason.  And when we compare several such models against each other, we should use Bayesian techniques to decide which one is best.  The optimal process for capturing truth in science is the one that most accurately mirrors our innate truth-seeking abilities.  It's turtles all the way down.
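As a toy illustration of that Bayesian model comparison (again my own sketch, with made-up numbers), suppose we want to decide whether a participant's yes/no judgments come from an unbiased process or a biased one.  Computing each model's marginal likelihood and taking their ratio, the Bayes factor, is exactly the comparison described above:

```python
from scipy import stats
from scipy.integrate import quad

# Made-up data: a participant judged 62 of 90 statements "true".
k, n = 62, 90

# Model 1: unbiased responder -- theta fixed at 0.5.
marginal_m1 = stats.binom.pmf(k, n, 0.5)

# Model 2: biased responder -- theta unknown, uniform prior on [0, 1].
# The marginal likelihood averages the likelihood over the prior.
marginal_m2, _ = quad(lambda theta: stats.binom.pmf(k, n, theta), 0, 1)

bayes_factor = marginal_m2 / marginal_m1
print(f"Bayes factor (biased vs. unbiased): {bayes_factor:.1f}")
# A factor well above 1 is evidence for the biased-responder model;
# its magnitude says how strongly the data favor one model over the other.
```

The same machinery scales up to comparing full cognitive models against each other: the ratio of marginal likelihoods is the Bayesian answer to "which model is best?", with each model automatically penalized for the flexibility its prior buys it.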

This is one of the most interesting and important discussions of the modern era: the bedrock of the scientific method and of model building is actively and fiercely debated.  We need to re-examine how we think about statistics and how we use its techniques to describe and predict our world.  These questions are more relevant now than ever, and we're on the brink of a major paradigm shift.

Before we can build cognition-supporting technology, we need to understand cognition.  Before we can do that, we need tools to fairly evaluate the scientific claims of social psychology and tools to model human cognition.

Ultimately, I hope that with a thorough understanding of statistics and inquiry, we can build a foundation of trustworthy scholarship that will chart the way forward.  With it, we can push the future of technology towards tools that make a real, positive impact on our behavior, attention, and feelings.