Search Results
Working Paper
Explaining Machine Learning by Bootstrapping Partial Marginal Effects and Shapley Values
Machine learning and artificial intelligence are often described as “black boxes.” Traditional linear regression is interpreted through its marginal relationships as captured by regression coefficients. We show that the same marginal relationship can be described rigorously for any machine learning model by calculating the slope of the partial dependence functions, which we call the partial marginal effect (PME). We prove that the PME of OLS is analytically equivalent to the OLS regression coefficient. Bootstrapping provides standard errors and confidence intervals around the point ...
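The abstract's central claim can be checked numerically: for OLS, the slope of the partial dependence function (the PME) reproduces the regression coefficient exactly. Below is a minimal sketch, not the paper's implementation; the `partial_dependence` helper and the toy data are illustrative assumptions.

```python
import numpy as np

# Toy data with known linear structure (illustrative, not from the paper)
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit OLS (with intercept) by least squares
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0]

def predict(X_):
    return np.column_stack([np.ones(len(X_)), X_]) @ beta

def partial_dependence(predict, X, j, grid):
    """PD_j(x): average prediction with feature j fixed at x."""
    out = []
    for x in grid:
        Xc = X.copy()
        Xc[:, j] = x
        out.append(predict(Xc).mean())
    return np.array(out)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pd_vals = partial_dependence(predict, X, 0, grid)

# PME: first-difference the partial dependence function over the grid
pme = np.diff(pd_vals) / np.diff(grid)
print(pme.mean(), beta[1])  # for OLS, the PME equals the coefficient on feature 0
```

Because `predict` is linear, the partial dependence function is a straight line and every first difference recovers the coefficient; bootstrapping this procedure (refit on resampled rows, recompute the PME) would give the standard errors the abstract describes.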
Working Paper
Explaining Machine Learning by Bootstrapping Partial Dependence Functions and Shapley Values
Machine learning and artificial intelligence methods are often referred to as “black boxes” when compared with traditional regression-based approaches. However, both traditional and machine learning methods are concerned with modeling the joint distribution between endogenous (target) and exogenous (input) variables. Where linear models describe the fitted relationship between the target and input variables via the slope of that relationship (coefficient estimates), the same fitted relationship can be described rigorously for any machine learning model by first-differencing the partial ...
Working Paper
Understanding Models and Model Bias with Gaussian Processes
Despite growing interest in the use of complex models, such as machine learning (ML) models, for credit underwriting, ML models are difficult to interpret, and it is possible for them to learn relationships that yield de facto discrimination. How can we understand the behavior and potential biases of these models, especially if our access to the underlying model is limited? We argue that counterfactual reasoning is ideal for interpreting model behavior, and that Gaussian processes (GP) can provide approximate counterfactual reasoning while also incorporating uncertainty in the underlying ...
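The GP-based counterfactual idea above can be sketched with a plain-numpy Gaussian process surrogate: fit a GP to a black-box model's scores, then compare the posterior at an input and at its counterfactual, with posterior uncertainty attached. Everything here (the RBF lengthscale, the toy `black_box` function, the query points) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Stand-in for an opaque model we can only query (illustrative)
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1]

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 2))
y = black_box(X)

# GP posterior: K alpha = y, with a small jitter for stability
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def gp_predict(Xq):
    """Posterior mean and standard deviation at query points Xq."""
    Ks = rbf(Xq, X)
    mean = Ks @ alpha
    Kinv_Kst = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.einsum("ij,ji->i", Ks, Kinv_Kst)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Counterfactual query: same applicant with one feature flipped
x0 = np.array([[0.5, 1.0]])
x_cf = np.array([[0.5, -1.0]])
m0, s0 = gp_predict(x0)
m1, s1 = gp_predict(x_cf)
print(m1[0] - m0[0], s0[0], s1[0])  # counterfactual effect, with uncertainty
```

The posterior standard deviation is the point of using a GP here: far from the training data the surrogate reports high uncertainty rather than a confident counterfactual answer, which matters when access to the underlying model is limited.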
Working Paper
What Do LLMs Want?
Large language models (LLMs) are now used for economic reasoning, but their implicit “preferences” are poorly understood. We study LLM preferences as revealed by their choices in simple allocation games and a job-search setting. Most models favor equal splits in dictator-style allocation games, consistent with inequality aversion. Structural estimates recover Fehr–Schmidt parameters that indicate inequality aversion is stronger than in similar experiments with human participants. However, we find these preferences are malleable: reframing (e.g., masking social context) and learned ...
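The Fehr–Schmidt model the abstract estimates is easy to state and simulate: a player's utility is their own payoff minus penalties for disadvantageous and advantageous inequality. The sketch below shows why, for a sufficiently large advantageous-inequality parameter, a dictator prefers the equal split; the specific parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

def fehr_schmidt_utility(x_i, x_j, alpha, beta):
    """Two-player Fehr-Schmidt inequality-averse utility.
    alpha: weight on disadvantageous inequality (the other is ahead)
    beta:  weight on advantageous inequality (self is ahead)"""
    return x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)

# Dictator game: split an endowment E; the dictator keeps k, gives E - k
E = 10
alpha, beta = 0.8, 0.6  # illustrative parameters; beta > 0.5 matters below
keeps = np.arange(E + 1)
utils = [fehr_schmidt_utility(k, E - k, alpha, beta) for k in keeps]
best = keeps[int(np.argmax(utils))]
print(best)  # with beta > 0.5, the equal split (keep 5) maximizes utility
```

Above the equal split, each extra unit kept yields marginal utility 1 - 2*beta, which is negative when beta > 0.5; below it, the marginal utility 1 + 2*alpha is always positive, so the optimum sits exactly at the equal split, matching the equal-split behavior the abstract reports for most models.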