Modelling and Control of Stochastic Networks of Resource Allocation Facilities

Lead CIs: Peter Taylor, Matthew Roughan and Nigel Bean

Many systems of significance to society can be considered as `networks of resource allocation facilities’. They are made up of processing facilities, each of which has a finite set of resources. Examples include:

  • An ad hoc mobile telephone network, allocating bandwidth and battery power on a circuit-switched basis.
  • A set of distributed data servers, such as those used by Google or Microsoft's search engines.
  • A network of doctors, beds in hospital wards, diagnostic services and intensive care facilities in a hospital.
  • An energy distribution network, consisting of sources, sinks, transmission links of varying voltage, distribution and transmission substations, transformers and demand control devices.
  • A complex weapons system, consisting of multiple types of weapons, sensors and targets, each with their own characteristics.

The managers of such systems face the problem of putting in place procedures that ensure that the systems operate `efficiently’. Typically, they wish the system to stay close to an operating point where access to resources is provided in a fair and timely manner, and system resources are not wasted. In general, the operating point might be described with reference to optimising some agreed function of system-wide utility that trades off wastage against timely processing of workload.
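One common concrete choice for such a system-wide utility (an illustrative assumption here, not something fixed by the proposal) is a weighted logarithmic utility, which leads to proportionally fair allocations. A minimal sketch:

```python
def proportionally_fair(weights, capacity):
    """Maximise sum_i w_i * log(x_i) subject to sum_i x_i <= capacity.

    Setting each derivative w_i / x_i equal to a common multiplier and
    using all the capacity gives x_i = w_i * capacity / sum(weights),
    i.e. the resource is shared in proportion to the weights.
    """
    total = float(sum(weights))
    return [w / total * capacity for w in weights]

# Three users with weights 1, 2 and 3 sharing 12 units of capacity
# receive allocations in the ratio 1:2:3, i.e. 2, 4 and 6 units.
allocation = proportionally_fair([1, 2, 3], 12.0)
```

Logarithmic utilities penalise starving any one user heavily, which is one way of formalising the `fair and timely' requirement; other utility functions trade fairness against raw throughput differently.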

There are several ways of achieving this. At one end of the spectrum, managers might assume centralised control and directly make resource allocation decisions as a function of current observations. At the other end, users might make decisions that they perceive to be in their own interests. In such systems, the role of the manager is to put in place a suitable regulatory framework that governs user decisions.

Commonalities in these problems include:

  • They are driven by stochastic inputs and/or the behaviour of the system is inherently stochastic;
  • Players in the system often have only local information;
  • There are moderate to long delays between measurement and control;
  • There are heterogeneous resources (for example, some resources might be fast but `expensive’ and others slow but `cheap’); and
  • The system's response to inputs and control is non-linear.
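The last point can be seen even in the simplest stochastic service facility: in an M/M/1 queue the mean time in system is the standard result 1/(μ − λ), which blows up non-linearly as the arrival rate λ approaches the service rate μ.

```python
def mm1_mean_delay(arrival_rate, service_rate):
    """Mean time in system for an M/M/1 queue: 1 / (mu - lambda).

    Valid only for a stable queue, i.e. arrival_rate < service_rate.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Doubling the load from 45% to 90% of capacity multiplies the mean
# delay by 5.5x, and pushing on to 99% multiplies it by 55x.
delays = [mm1_mean_delay(rho, 1.0) for rho in (0.45, 0.90, 0.99)]
```

A controller linearised around one operating point therefore behaves very differently when the load drifts, which is one reason the linearised controllers mentioned below can oscillate.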

Information about the real-time operation of such systems can be gathered via a number of mechanisms:

  • Sensors embedded by the system owner either in the facilities or in the links that connect them (for example, SNMP measurements of traffic in an IP network).
  • Sensors associated with users as they traverse the system (for example, road traffic congestion estimation via mobile telephone monitoring).
  • Information collected by external observers who `conduct experiments’ on the system (for example, “pings”, giving a data server's response times).

In most cases such information is imperfectly observed and, frequently, observation incurs a cost.

Current approaches to the control of such systems often seem to involve one of the following:

  • a static configuration (for example, round-robin) with no real control;
  • ad hoc solutions that may appear to work, but are often not studied carefully enough to verify that this is true in all situations; or
  • linearised controllers, which are known to oscillate, at least in some cases.
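The gap between a static configuration and state-aware control can be made concrete with a toy discrete-time simulation (the model and parameters below are illustrative assumptions): bursty arrivals are dispatched to a bank of servers either round-robin, ignoring system state, or to the shortest queue, using it.

```python
import random

def simulate(policy, n_steps=10000, n_servers=3, seed=1):
    """Toy sketch: each step, a random burst of jobs arrives and is
    dispatched to server queues; each server completes at most one job
    per step. Returns the time-averaged total backlog."""
    rng = random.Random(seed)
    queues = [0] * n_servers
    next_rr = 0
    total_backlog = 0
    for _ in range(n_steps):
        burst = rng.randint(0, 4)  # stochastic, bursty input (mean 2 < capacity 3)
        for _ in range(burst):
            if policy == "round_robin":
                i = next_rr % n_servers  # static, state-blind dispatch
                next_rr += 1
            else:  # "join_shortest_queue": exploits (local) state information
                i = min(range(n_servers), key=lambda j: queues[j])
            queues[i] += 1
        for j in range(n_servers):  # each server serves one job per step
            if queues[j] > 0:
                queues[j] -= 1
        total_backlog += sum(queues)
    return total_backlog / n_steps

rr_backlog = simulate("round_robin")
jsq_backlog = simulate("join_shortest_queue")
```

Under these assumptions the state-aware policy keeps the average backlog no worse, and typically lower, than round-robin; the point is only that even crude state feedback can beat a static configuration, not that join-the-shortest-queue is optimal here.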

Specific problems of interest include:

  • How do we define suitable transient and stationary measures of system performance?
  • How do we characterise `optimal performance’ and the policies that achieve it?
  • How do we decide on optimal placement of sensors and observation regimes?
  • What are cost-effective methods for analysing sensor data in real time?
  • What type of undesirable user behaviour does the system need to protect itself against?
  • How do we control selfish users?
  • Do price-signalling mechanisms have a role in controlling a decentralised system?
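On the last question, one classical style of price-signalling mechanism (sketched here under simple illustrative assumptions, with hypothetical function names) has the network post a price, lets each selfish user best-respond to it, and adjusts the price until demand matches capacity:

```python
def best_response(weight, price):
    """A selfish user maximising weight*log(x) - price*x chooses x = weight/price."""
    return weight / price

def clearing_price(weights, capacity, steps=500, step_size=0.05):
    """Tatonnement sketch: raise the price while aggregate demand exceeds
    capacity, lower it while capacity is underused, until the market clears."""
    price = 1.0
    for _ in range(steps):
        demand = sum(best_response(w, price) for w in weights)
        price = max(1e-9, price + step_size * (demand - capacity))
    return price

# Users with weights 1, 2 and 3 share 10 units of capacity; the
# market-clearing price is sum(weights)/capacity = 0.6, and each user's
# resulting rate weight/price recovers a proportionally fair share.
price = clearing_price([1.0, 2.0, 3.0], 10.0)
```

The appeal of such mechanisms is that no user needs global information: each reacts only to the posted price, yet the decentralised equilibrium coincides with a system-wide fair allocation. Whether this survives delays, imperfect observation and strategic behaviour is exactly the kind of question the project would address.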