Over the last decade or so, storage capacity and the ability to process huge amounts of data have increased dramatically, making large, high-quality data sets widely accessible to practitioners. This technological development poses a serious challenge to inference methodology, and in particular to the simulation algorithms commonly applied in Bayesian inference. These algorithms typically require repeated evaluations over the whole data set when fitting a model, which precludes their use in the age of so-called big data.
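To make the computational bottleneck concrete, the following sketch (not part of the original text, and purely illustrative) shows a random-walk Metropolis sampler for the mean of a hypothetical Gaussian model in Python. Every iteration evaluates the log-likelihood over all n observations, so the per-iteration cost grows linearly with the size of the data set.

```python
import numpy as np

# Illustrative sketch, assuming a Gaussian model with unknown mean theta,
# known unit variance, and an N(0, 10^2) prior. The point is that each
# MCMC iteration calls log_likelihood, which touches all n observations.

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=1_000_000)  # hypothetical large data set


def log_likelihood(theta, y, sigma=1.0):
    # Full-data log-likelihood: O(n) work on every call.
    return -0.5 * np.sum((y - theta) ** 2) / sigma**2


def log_prior(theta):
    # N(0, 10^2) prior on theta.
    return -0.5 * theta**2 / 100.0


def metropolis(y, n_iter=1_000, step=0.01):
    theta = 0.0
    log_post = log_likelihood(theta, y) + log_prior(theta)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + step * rng.normal()
        # Each proposal requires another full pass over y.
        log_post_prop = log_likelihood(proposal, y) + log_prior(proposal)
        if np.log(rng.uniform()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        draws[i] = theta
    return draws


draws = metropolis(y)
print(draws[-5:])
```

With a million observations and a thousand iterations, this toy sampler already performs a billion likelihood-term evaluations; for genuinely large data sets this full-data cost per iteration is what motivates the subsampling and scalable inference methods discussed in the literature.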