Widening the frame: Rational choice beyond a given utility function
Supply chain and logistics planning problems can be seen as optimisation problems: collect as much relevant information as possible, determine the possible choices, and select the action with the highest expected utility. They thus lend themselves to AI solutions built on the same model: “… we build optimising machines, we feed objectives into them, and off they go” (Russell 2019, 172). “Rational choice” in this sense assumes a given utility function. But apart from the well-known problems rational choice faces in real-world environments (e.g. uncertainty, dynamic change, other agents, non-discrete action spaces), we know from the human example that highly complex choices in such environments require metacognition: considering which utility function to use, whether our reasoning is trustworthy, whether our knowledge is sufficient, whether to act now or to optimise the decision further, whether a course of action is ethical. Humans (and some animals) are able to change the frame of reference and shift to metacognition when needed. Supply chain and logistics planning problems are therefore a good setting for a case study of this metacognition problem in a practical environment. When and how should a system say: “It is best not to decide this and act now; I should change the frame”?
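The contrast between a fixed-utility optimiser and one with a metacognitive escape hatch can be made concrete. The following sketch is purely illustrative (all names, numbers, and the margin heuristic are invented for this example, not drawn from any cited system): it selects the action with the highest expected utility, but when the top candidates are too close to call, it declines to act and signals that the frame should be reconsidered.

```python
# Illustrative sketch (hypothetical names and numbers): classical
# expected-utility choice plus a minimal metacognitive guard.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions, margin=0.5):
    """Return the action with the highest expected utility.

    If the top two actions lie within `margin` of each other, return
    None instead: a crude metacognitive signal that the system should
    not decide and act now, but change the frame -- gather more
    information or question the utility function itself.
    """
    scored = sorted(
        ((expected_utility(outcomes), name) for name, outcomes in actions.items()),
        reverse=True,
    )
    if len(scored) > 1 and scored[0][0] - scored[1][0] < margin:
        return None  # escalate to metacognition rather than act
    return scored[0][1]

# Two candidate logistics actions, each with (probability, utility) outcomes:
actions = {
    "ship_now":   [(0.8, 10.0), (0.2, -5.0)],   # EU = 7.0
    "wait_a_day": [(0.6, 12.0), (0.4, -1.0)],   # EU = 6.8
}

print(choose(actions))               # EUs differ by 0.2 < 0.5 -> None (escalate)
print(choose(actions, margin=0.1))   # with a looser guard -> "ship_now"
```

The margin test is of course far too simple to count as real metacognition; it only marks the architectural point at which a planning system could step outside its given utility function rather than optimise within it.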