Estimation of Distribution Algorithms (EDAs) use probabilistic models to replace the selection and variation operators of
traditional evolutionary algorithms. A probabilistic model is constructed from a population of evaluated solutions; that model is
then sampled to produce a successor population, and the cycle repeats.
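This model–sample loop can be sketched with the simplest possible model, a univariate marginal distribution. The OneMax fitness function and all parameter values below are illustrative assumptions for the sketch, not details of the work described here.

```python
import random

def onemax(x):
    # Toy fitness function: the number of 1-bits (illustrative only).
    return sum(x)

def univariate_eda(n_bits=20, pop_size=100, n_select=50, generations=30, seed=0):
    rng = random.Random(seed)
    # Initial random population of bit-strings.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest solutions (truncation selection).
        pop.sort(key=onemax, reverse=True)
        selected = pop[:n_select]
        # Build the model: per-bit marginal probability of a 1 among the selected.
        p = [sum(x[i] for x in selected) / n_select for i in range(n_bits)]
        # Sample a successor population from the model, and repeat.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
    return max(pop, key=onemax)

best = univariate_eda()
```

The univariate model ignores interactions between variables; the Markov Network approach discussed below replaces it with a richer model of the distribution.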
By far the most common approach to building the probabilistic model is the Bayesian Network (BN). In this research, we explore an
alternative approach to modelling the distribution, which differs from the mainstream in two key respects. First, the joint
probability distribution (JPD) is modelled as an undirected graph (a Markov Network) and factorised as a Gibbs distribution.
Second, the construction of the model uses solution fitness values directly to determine the parameters of the distribution; in
this sense, our approach learns the fitness function. The approach benefits evolution, but at some computational expense in
building the model.
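As a hedged sketch of how fitness values can determine the parameters of a Gibbs distribution: take a univariate energy function U(x) = a0 + Σ a_i s_i over spins s_i ∈ {−1, +1}, assume f(x) ∝ exp(−U(x)/T), and fit the coefficients to −ln f(x) by least squares over the evaluated population. The energy form, function names, and encoding here are assumptions for illustration, not the exact model of the work described.

```python
import math
import numpy as np

def fit_energy_model(population, fitnesses):
    """Least-squares fit of a univariate energy U(x) = a0 + sum_i a_i * s_i,
    with s_i = +1/-1 encoding bit i, to the targets -ln f(x).
    (Illustrative sketch of fitness-driven model building.)"""
    X = np.array([[1.0] + [1.0 if b else -1.0 for b in x] for x in population])
    y = np.array([-math.log(f) for f in fitnesses])  # Gibbs: f(x) ∝ exp(-U(x))
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # a0, a1, ..., an

def marginal_probabilities(coeffs, temperature=1.0):
    """Per-bit probability of sampling a 1 under the univariate Gibbs model:
    p(s_i = +1) = 1 / (1 + exp(2 * a_i / T))."""
    return [1.0 / (1.0 + math.exp(2.0 * a / temperature)) for a in coeffs[1:]]
```

With pairwise or higher-order terms added to U, the same fit yields a Markov Network model, though solving the larger regression is where the extra computational expense arises.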
We have demonstrated the costs and benefits of a Markov Network EDA by comparing it with other EDAs on a range of problems. Our
current research explores the relationship between optimization performance and the quality of the probabilistic model in binary
EDAs that use Markov Network models.