Here I propose the model of Markov Networks (MNs) and speculate on how they can serve as a framework for forming dynamic sepsets amongst players, based on their reciprocal beliefs. These beliefs can be expressed in the context of Bayesian inference, where the prior is an assigned, private, musical personality. The players communicate their affinity preferences over a computer network. The implementation of such a model is realised in Max and uses a graphical user interface to represent the status of the undirected graph.
First, let’s remind ourselves of Bayes’ rule, huh…
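In the form relevant here, the posterior over a player's type θ given sonic evidence e is (θ and e are my notation, not taken from the patch; the prior P(θ) corresponds to the assigned private musical personality):

```latex
P(\theta \mid e) = \frac{P(e \mid \theta)\, P(\theta)}{P(e)}
```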
MNs are also known as Markov Random Fields (MRFs), and they originated from the modelling of ferromagnetic materials. In this statistical-physics context, the model used is the Ising model, which represents the spins of atoms on a grid.
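To make the connection concrete, here is a minimal sketch (not part of the Max implementation) of Gibbs sampling on a small toroidal Ising grid. Each spin is resampled from its conditional distribution given only its four neighbours, which is exactly the local Markov property of an MRF. Grid size, temperature and seed are arbitrary choices of mine.

```python
import math
import random

def ising_gibbs_step(spins, beta=0.4):
    """One Gibbs-sampling sweep over an n x n grid of +/-1 spins.

    Each spin is resampled from its conditional given its four
    neighbours (toroidal boundary), i.e. the MRF local Markov property.
    """
    n = len(spins)
    for i in range(n):
        for j in range(n):
            # sum of the four neighbouring spins
            s = (spins[(i - 1) % n][j] + spins[(i + 1) % n][j]
                 + spins[i][(j - 1) % n] + spins[i][(j + 1) % n])
            # P(spin = +1 | neighbours) for the Ising conditional
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
            spins[i][j] = 1 if random.random() < p_up else -1
    return spins

random.seed(1)
grid = [[random.choice([-1, 1]) for _ in range(8)] for _ in range(8)]
for _ in range(50):
    ising_gibbs_step(grid)
```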
A formal definition is actually quite intuitive and can be stated as follows:
A Markov Network is a random field S, a collection of indexed random variables (either discrete or continuous), in which every variable is conditionally independent of all other variables in S given its neighbours.
For this model I use a dynamic Gibbs distribution whose factors have scopes that are continuously re-assigned in real time by the players and their respective local preferences. These preferences are expressed as a function of each player’s inference about what type of players the others are. This is private information and has to be inferred from the sonic evidence. There are four players, and each can instantiate only one connection, drawing an edge between themselves and another player. They are represented by four coloured nodes. I suppose one could call ’em Mr/Miss Red, Mr/Miss Green, etc… (RGBY).
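The edge-reassignment mechanism above can be sketched as follows (a toy model of mine, not the Max patch): each player holds at most one outgoing choice, and choosing a new partner drops the previous one, mirroring the continuously re-assigned factor scopes.

```python
PLAYERS = ("red", "green", "blue", "yellow")

class AffinityGraph:
    """Undirected graph where each player instantiates at most one edge.

    Re-connecting replaces the player's previous choice, so the set of
    edges (factor scopes) changes dynamically as preferences shift.
    """

    def __init__(self):
        self.choice = {p: None for p in PLAYERS}  # each player's single pick

    def connect(self, player, partner):
        assert player in PLAYERS and partner in PLAYERS and player != partner
        self.choice[player] = partner  # overwrites any earlier choice

    def disconnect(self, player):
        self.choice[player] = None

    def edges(self):
        # an undirected edge exists for every instantiated choice;
        # mutual choices collapse into a single edge
        return {frozenset((p, q)) for p, q in self.choice.items() if q}

g = AffinityGraph()
g.connect("red", "green")
g.connect("green", "red")   # mutual choice: still one undirected edge
g.connect("blue", "yellow")
```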
There are also four possible types of player: cooperative, non-cooperative, chaotic and solipsistic.
Affinity preferences are chosen according to a local pairwise distribution, as players try to optimise their joint assignments:
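In standard pairwise-MRF notation, the joint over the four players' assignments would take the Gibbs form below (a sketch in my own symbols, not copied from the patch), with one factor φ per instantiated edge and Z the normalising partition function:

```latex
P(x_1, \dots, x_4) \;=\; \frac{1}{Z} \prod_{(i,j) \in E} \phi_{ij}(x_i, x_j),
\qquad
Z \;=\; \sum_{x} \prod_{(i,j) \in E} \phi_{ij}(x_i, x_j)
```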
Players can choose between three interfaces: a GUI where they connect/disconnect by pressing keys on their laptop; another where the same events are triggered by a simple joystick with four coloured buttons; or a third, operated via face recognition, realised in Processing using computer vision and sending the relevant information to Max via OSC.
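For readers curious about the wire format, here is a minimal, stdlib-only sketch of how a message like the face-tracking data could be packed as an OSC packet for Max. In practice one would use a library (e.g. oscP5 in Processing), and the address `/face/xy` is illustrative, not taken from the actual patch.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message with float32 arguments.

    Layout: padded address string, padded type-tag string (',' plus one
    'f' per argument), then big-endian float32 values.
    """
    data = osc_pad(address.encode()) + osc_pad(("," + "f" * len(args)).encode())
    for a in args:
        data += struct.pack(">f", a)
    return data

# hypothetical normalised face coordinates sent to Max
packet = osc_message("/face/xy", 0.25, 0.75)
```

Sending `packet` over a UDP socket to the port that Max's `udpreceive` object listens on would deliver the values to the patch.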
Here is a screenshot:
Finally, you might wonder how it sounds, huh?
Here is the very first trial of this interaction model, performed on 03.12.2015 @ SARC with Anne La Berge, Robert van Heumen, Ricardo Jacinto and myself:
After the above instance, having collected feedback, impressions and suggestions from the players, I decided to make substantial changes to the model, both in how the GUI presented itself to the players and in the static/hardcoded nature of the interface, which allowed only four players to interact in the network: an obvious limitation in terms of both the creative outcome and the combinatorial potential of the model.
The GUI now looks like this:
On the left, the player’s GUI. On the right, the pop-up screen for the set-up.
The baptism of MRF_beta was kindly performed by SARC’s resident experimental ensemble QUBe, led and directed by Dr. Paul Stapleton, on 23.02.2016.
Below, a video of the occasion: