Society of minds: reflections on decision-making, control, modelling and inference with humans and machines

Multiple interaction loops arise when considering ML systems, individuals, and societies.

[This post accompanies a talk I gave to a general audience at NTNU on 5 February 2019. I have added some extra references and pointers. It mostly points to general trends in the field without going into much detail.]

We are living in an era of growing interest in Artificial Intelligence and Machine Learning. This surge has been driven mostly by growing computational power, cleverly designed and scalable algorithms, data availability, and new models capable of delivering systems with human-level capabilities* in a set of complex tasks in natural language processing, computer vision, automatic control, and automated decision-making. As researchers, students, and practitioners, we are attracted to these ideas because of the big questions around intelligence (including our own, as humans) and the potential of unlocking new capabilities and positive societal, economic, and individual opportunities. As we navigate the multiplex of ideas and trends around the question of intelligence, we hope to unfold and connect some components that can be crucial for human flourishing and scientific progress.

As our body of knowledge has moved from conceptual and speculative arenas towards empirical and testable theories, systems, and models, we see the emergence of a new, human-centric engineering discipline**. As an emerging discipline, its state of affairs today is a mix of connected components without a central, all-encompassing theory, though it draws on many well-established disciplines, including but not limited to control theory, information theory, statistics, optimization, computer science, neuroscience, economics, mathematics, logic, and philosophy. In this talk we look very briefly at a single thread in this multiplex of ideas: machine understanding and prediction of human behavior (both group and individual) and the interactions between humans, machines, and the larger environment.

Traditional recommender systems seek to estimate a rating function for each user-item pair, based on items the user has already rated (or ratings from similar users, the content of the item, or contextual information); this is generally stated as a problem of matrix factorization and completion. The classical matrix-factorization approach to collaborative filtering consists of assuming latent factors for the rows and columns of the matrix (users and items), using past entries as the training dataset, and inferring which factors best predict the entries of the matrix, a prediction problem. The rating function learned from the data then predicts a user's interest in any given unseen item.
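As a minimal sketch of the classical approach (the rating matrix and all hyperparameters below are made up for illustration), latent factors for users and items can be fit by stochastic gradient descent on the observed entries only, and the resulting factors then fill in the missing entries:

```python
import numpy as np

def factorize(R, mask, k=2, steps=2000, lr=0.01, reg=0.1, seed=0):
    """Fit R ≈ U @ V.T by SGD over the observed entries only."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))  # user factors
    V = 0.1 * rng.standard_normal((n_items, k))  # item factors
    rows, cols = np.nonzero(mask)
    for _ in range(steps):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])  # regularized update
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Toy 4-user x 3-item rating matrix; 0 marks a missing entry.
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [1., 1., 5.],
              [0., 1., 4.]])
mask = R > 0
U, V = factorize(R, mask)
pred = U @ V.T  # predicted ratings, including the unseen entries
```

Completing the matrix this way is exactly the "prediction problem" framing: the model is judged only by how well it reproduces held-out entries of a fixed matrix.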

Nevertheless, by deploying predictive recommender systems we are also increasingly influencing how users behave, dynamically shaping the very behavior we initially set out only to predict; this growing concern is being addressed by emerging work on algorithmic bias, strategic behavior, and performative prediction***. The feedback loop is introduced by deploying predictions in the world: the world changes with those predictions, which turns predictions into productions. The predictive model now influences how the data shift will happen, producing new datasets that deviate from its initial training data. Questions of stability, convergence, and sensitivity become fundamental. Even the concept of uncertainty acquires a twist: predictions are supposed to model (and in some sense reduce) uncertainty, yet once we account for the performativity of such systems, they may actually increase it (if the feedback loop is unstable). This poses the problem of finding predictors that anticipate such effects, leading to algorithms for learning and inference that take them into account and compensate accordingly.
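A toy simulation can illustrate the feedback loop and the stability question (the linear shift model and the parameters `base` and `eps` are invented for this sketch, not taken from the cited papers): deploying a prediction `theta` shifts the mean of the outcome distribution by `eps * theta`, and the model is then retrained on the shifted data.

```python
import numpy as np

def deploy_and_refit(theta, base=1.0, eps=0.5, n=10000, seed=0):
    """One round: deploying prediction theta shifts the outcome
    distribution to mean base + eps * theta; refit by taking the mean."""
    rng = np.random.default_rng(seed)
    y = base + eps * theta + 0.1 * rng.standard_normal(n)
    return y.mean()

theta = 0.0
for t in range(30):  # repeated retraining under deployment feedback
    theta = deploy_and_refit(theta, seed=t)
# With feedback strength eps = 0.5 < 1 the iteration settles near the
# fixed point base / (1 - eps) = 2.0; with eps >= 1 it diverges.
```

The fixed point of this iteration is a simple instance of a "performatively stable" predictor: one that is optimal for the very distribution its own deployment induces, and the condition `eps < 1` is the stability question made concrete.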

This observation of a two-way direction of influence is significant, and it echoes a similar observation made by Wiener and colleagues while involved in the transdisciplinary research program that culminated in the new field of Cybernetics, the study of control and communication in the animal and the machine. At the same time that the point of human agency in human-machine systems is being pushed further, it has become clearer that this is not purely a matter of substituting human cognition; we should be aiming at augmenting human capabilities and at cooperative cognition between different modalities of cognition. New human-in-the-loop systems have recently been developed in which cooperation between human and machine cognition is taken as the basic feedback loop, alongside machine capabilities for learning from few examples, semi-supervised learning, and the use of simulations.
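One concrete form of such a human-machine feedback loop is pool-based active learning. The sketch below is a hypothetical setup (the 1-D data, the `oracle` standing in for the human labeler, and the tiny logistic-regression fit are all invented for illustration): the machine trains on a few labels, asks the human to label the point it is most uncertain about, and repeats.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1-D points; the "human in the loop" is an
# oracle that labels sign(x) on request.
data = rng.uniform(-3, 3, size=200)
oracle = lambda x: (x > 0).astype(float)

def fit_logreg(x, y, steps=500, lr=0.5):
    """Tiny 1-D logistic regression fit by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

pool = data.copy()
labeled_x = np.array([-2.5, 2.5])    # two seed labels from the human
labeled_y = oracle(labeled_x)
for _ in range(10):                  # loop: machine queries, human labels
    w, b = fit_logreg(labeled_x, labeled_y)
    p = 1 / (1 + np.exp(-(w * pool + b)))
    i = np.argmin(np.abs(p - 0.5))   # most uncertain point in the pool
    labeled_x = np.append(labeled_x, pool[i])
    labeled_y = np.append(labeled_y, oracle(pool[i]))
    pool = np.delete(pool, i)
w, b = fit_logreg(labeled_x, labeled_y)  # final model, 12 labels total
```

The point of the sketch is the division of labor: the human contributes a handful of labels where the machine is least certain, and the machine's state in turn decides what the human is asked next, a basic cooperative feedback loop rather than a substitution of one side by the other.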

On a more open-ended and speculative note, the concept of purpose in the decision-making process is still obscured and hidden behind the designer's choices of loss function, regularization, dataset, and the meta-information available for each data point. There are many theoretical and computational challenges that are less obvious when we look at task-specific systems; nevertheless, they must be addressed properly when thinking in broader terms about the interplay of decision-making and inference, and as we deploy more general-purpose systems for the public.


* The idea of human-level capabilities is conditioned on the metrics we use; nevertheless, independently of any particular metric, we have observed a tendency for current ML systems to reach above-human-level performance on any proposed metric after a certain time.

** Michael I. Jordan, in the article "Artificial Intelligence—The Revolution Hasn't Happened Yet", discusses various aspects of modern ML/AI as part of an emerging human-centric engineering discipline, combining data, inference, and decision-making in large-scale societal systems.

*** Algorithmic confounding [4] in recommender systems happens when the system models users' preferences without taking into account how the recommendations themselves will drift or change those preferences; performative prediction, as defined by Perdomo et al. (2020) [3], arises in supervised learning from the acknowledgment that the deployment of a predictive system will change the outcome it is trying to predict.


1. Predicting the unpredictable: Value-at-risk, performativity, and the politics of financial uncertainty. Lockwood, E. 2015. Review of International Political Economy.

2. Breaking Feedback Loops in Recommender Systems with Causal Inference. Krauth, K.; Wang, Y.; Jordan, M. I. 2022. arXiv preprint.

3. Performative Prediction. Perdomo, J. C.; Zrnic, T.; Mendler-Dünner, C.; Hardt, M. 2020. ICML.

4. How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility. Chaney, A. J. B.; Stewart, B. M.; Engelhardt, B. E. 2018. RecSys.
