
Automatic calibration with HYPE

Introduction

The hydrological simulation system used to run the hydrological model HYPE contains a number of optimization methods, providing a way to automatically calibrate some of the model parameters. Given a model setup (i.e. a catchment’s division into subbasins, a specified time period, and recorded precipitation, temperature and river flow in that period), the automatic calibration algorithm optimizes an objective function by modifying a user-specified set of model parameter values. Most of the optimization methods and functionalities implemented fall into two categories: sampling methods and directional methods.

There are in total nine optimization methods to choose from in HYPE. The sampling methods are a basic Monte-Carlo simulation with random parameter values chosen within a user-specified parameter interval, and two progressive Monte-Carlo simulations where the Monte-Carlo sampling is made in stages with a reduced parameter space in between the stages. In addition, it is possible to run an organized sampling of two parameters. The Differential Evolution Markov Chain method combines a genetic optimization algorithm with random sampling. The directional methods are the Brent method, two versions of quasi-Newton methods with different ways to calculate the search direction, and the method of steepest descent.

Given enough sampling points, the simple sampling methods can give an estimate of the optimum. An advantage of the sampling methods is that the number of function evaluations, and thus the computation time, is determined by the user. The sampling methods are also useful for providing a starting point for the directional optimization methods.

The Differential Evolution Markov Chain (DEMC) method provides an uncertainty estimate of the optimum. The genetic algorithm (i.e. DE) works by proposing new members (parameter values) and then accepting or rejecting them. In addition to the random element in the creation of a proposal (by inheriting traits from other members and keeping some traits unchanged), the DEMC method adds a random number to the proposed parameters, and the proposal may be accepted with a certain probability even if the objective criterion is worse than for the replaced member. The advantage of DEMC over plain DE is both the possibility to get a probability-based uncertainty estimate of the global optimum and a better convergence towards it.

The directional methods progress iteratively from one set of model parameters to a new set with a better objective criterion. This is achieved by determining a direction of improvement, and then the optimal step length in that direction. The directional methods assume that a minimum exists within the parameter space. The determination of the direction is what separates the different optimization methods. It is given by one parameter at a time and by the direction between the last two best parameter sets (for the Brent method), or by a function of the gradient of the objective function. The methods using the gradient are more powerful, but require more evaluations. The directional methods depend on a starting point for their iterations. The choice of starting point is important for the performance of the methods; it influences the calculation time and possibly which (local) optimum is reached.

The automatic calibration algorithm is controlled by means of two or three files: info.txt and optpar.txt, and for some methods qNstartpar.txt. The following sections present and discuss the entries and numerical parameters of these files that are necessary and/or optional for using the automatic calibration.

Specification of calibration - info.txt

Figure 1: Example of info.txt file to be used for automatic calibration.

Generally speaking, the purpose of the info.txt is to govern the simulation. Most of the content of the file is the same as for an ordinary simulation. The following file content is relevant for automatic calibration:

The objective function is formed as a weighted sum of performance criteria,

  crit = w_1 c_1 + w_2 c_2 + ... + w_N c_N,

where c_1, c_2, ..., c_N are predefined performance criteria, and w_1, w_2, ..., w_N are relative weighting factors. The available performance criteria and their ids are listed here. The criterion id, the HYPE variable IDs of the computed and recorded variables to compare, as well as the period over which the variables are averaged before calculating the criterion, are specified for each performance criterion to be included in the objective function (see the block of data marked in red and green in Fig 1). For details on format see the description of the info-file.

In the example of Fig 1, the Nash-Sutcliffe efficiency (MR2) and the relative error (MRE) are calculated for daily discharge (cout and rout are compared using meanperiod 1). The two criteria are weighted together, with most of the weight on MR2 and a little on the volume error. A small weight on the relative error is usually enough to minimize the volume error while still getting a good NSE. In the example, all observations found in Qobs.txt are used to calculate the objective function. If more than one station is found, the MR2 criterion uses the average of the stations' NSE values.

Descriptions of all available criteria and their equations can be found in the wiki (criteria equations). Several performance criteria are available in HYPE because they are useful for different situations and variables. For example, correlation (MCC) can be used if you want to calibrate on timing but do not care about the average value, spatial criteria can be used if you want to focus on the spatial variation, and Nash-Sutcliffe efficiency adjusted for bias (MNW) can be used when calibrating water stage to avoid artefacts coming from different height systems.

Specification of calibration - optpar.txt

Figure 2: Example of optpar.txt file

The file optpar.txt defines what kind of optimization is to be done. There are several different optimization methods, each with its own settings and numerical parameters. The file also specifies which model parameters to calibrate, as well as numerical specifications for those. The file is divided into three sections:

Specification of calibration parameters - optpar.txt

Which model parameters to calibrate automatically is an important decision. Choosing many parameters may make the calibration take a long time. Some parameters may also be unsuitable to calibrate with a certain method, e.g. threshold parameters are tricky for the Brent method, which assumes parabolic behaviour near the optimum.

For each parameter to calibrate, HYPE expects a calibration interval. In addition, some methods require a determination accuracy or (for the DEMC method) a parameter-specific factor to scale the random noise added to the proposed next generation of this parameter. Therefore, regardless of method, three lines of values are given for each optimization parameter, all starting with the parameter name: the lower boundary of the calibration interval on the 1st line, the upper boundary on the 2nd line, and the accuracy or factor on the 3rd line, all written in decimal form.

Example: general parameter

The first parameter, in the case presented in Fig 2, is a general model parameter called rivvel (see the first purple block in Fig 2). A general parameter has one value for the whole model set-up. The rivvel parameter determines the flood wave celerity in the watercourse. We would like to vary it between 0.5 m/s and 2.0 m/s, and thus set the interval to [0.5, 2.0]. Further, we would like to determine its optimal value with an accuracy of no more than three decimals. The parameter rivvel appears on three consecutive lines, starting on the 22nd line (purple arrow in Fig 2); since the parameter is a scalar, each line holds only one value:
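In sketch form (the exact column layout follows the optpar.txt format and the example file in Fig 2; the spacing here is only indicative), the three lines would contain:

  rivvel   0.5
  rivvel   2.0
  rivvel   0.001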

Using the exact same three-line format for each model parameter, the corresponding calibration information is passed to the optimization methods; see the purple blocks in Fig 2.

Example: one-dimensional parameters

A type of parameter that is common in calibration is the correction parameter. These are often one-dimensional parameters depending on region, for example cevpcorr (second purple block in Fig 2). This parameter adjusts the evaporation parameter up or down compared to the value(s) given in the par.txt file. If we want to vary the evaporation between 5% and 195% of its value, we set the interval to [-0.95, 0.95]. Calibrating the evaporation by a correction parameter in this way allows:

  1. the evaporation to keep the variation between land-uses defined in par.txt while the overall evaporation is adjusted, and
  2. the evaporation to be adjusted differently for the different parameter regions (in this case three).

The format for one-dimensional parameters is the same as described above: minimum, maximum, and precision/factor on three consecutive lines. The dimension of a parameter may be parameter regions (as for cevpcorr above), land-use, soil type or something else. For one-dimensional parameters, we might want to calibrate only a few of the values, e.g. only the dominant land-uses, but not all of them. To avoid calibrating the parameter value associated with a certain land-use, it is sufficient to set the lower and upper interval boundaries to the same value. The calibration algorithm will then ignore this parameter value, since its variation interval has zero length, and the value specified in par.txt will be used instead.

The land-use dependent parameter ttmp in Fig 2 illustrates how to calibrate only a few of the values of a one-dimensional parameter. Suppose the 5th and 7th land-uses heavily dominate the model set-up. Since the other land-uses contribute only marginally to the model, modifications of their ttmp values will have little impact on the simulations, and it is therefore reasonable to calibrate only the 5th and 7th ttmp values. Suppose further that we expect the parameter values for those two land-uses to lie in the interval [-2, 2], and that we wish to determine the parameter values to one decimal.

To do so, we set the lower and upper interval boundaries to equal values (any values, as long as they are equal) for all positions, except the 5th and 7th where we set -2 as lower boundary and 2 as upper boundary; finally, we write 0.1 in the 5th and 7th columns of the 3rd line, in order to have the optimization determine the parameter values to one-decimal precision.
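Purely as an illustration (assuming seven land-use classes; the actual number of columns and the layout follow the model set-up and the example in Fig 2), the three ttmp lines could then look like this, where the zero-length intervals in the other columns mean that those values are taken from par.txt:

  ttmp   0.0   0.0   0.0   0.0  -2.0   0.0  -2.0
  ttmp   0.0   0.0   0.0   0.0   2.0   0.0   2.0
  ttmp   0.0   0.0   0.0   0.0   0.1   0.0   0.1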

Specification of optimization method and settings - optpar.txt

We discuss here, method by method, the upper part of the file optpar.txt, marked in blue in Fig 2. A comprehensive listing of all settings is found on the wiki-page of optpar.txt. Note that the codes and values of the settings are case sensitive, while the calibration parameter names are not. Note also that the values of the settings start in position 12 of the lines.

Basic Monte-Carlo method (task MC)

Figure 3: Example of optpar.txt file for a basic Monte-Carlo simulation

The basic Monte-Carlo method randomly samples parameter values from a uniform distribution within the parameter space. The method requires the specification of two numerical parameters: the number of Monte-Carlo simulations to make, and how many of them to save and print results for. The best simulation, or the best few, is then kept track of during the execution.

Suppose we want to perform a simple Monte-Carlo sampling of the criteria surface depending on the five model parameters cevp, wcfc, rrcs1 (for these three only the first value in the vector is calibrated), rivvel and damp. We require 10000 model runs and will keep the 5 best simulations.
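The principle can be illustrated with a short sketch (this is not HYPE code; the bounds, the objective function and all numbers except the 10000 runs and the 5 retained simulations are made-up placeholders for a HYPE run and its weighted criterion):

  import random

  # Illustrative parameter bounds for the five calibrated values; the numbers
  # here are made up and not taken from any real HYPE set-up.
  bounds = {"cevp": (0.1, 0.3), "wcfc": (0.05, 0.5), "rrcs1": (0.05, 0.6),
            "rivvel": (0.5, 2.0), "damp": (0.0, 1.0)}

  def objective(params):
      # Placeholder for running HYPE and evaluating the weighted criterion.
      return sum(v * v for v in params.values())

  n_runs, n_keep = 10000, 5        # 10000 Monte-Carlo runs, keep the 5 best
  best = []                        # list of (criterion, parameter set) pairs
  for _ in range(n_runs):
      sample = {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
      best.append((objective(sample), sample))
      best = sorted(best, key=lambda pair: pair[0])[:n_keep]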

The specification of calibration parameters starts on line 22. List the model parameters in no particular order, but with three rows for each parameter. The precisions specified in the parameter listing are not used by the sampling method, but they are required for HYPE to interpret the optpar.txt file correctly.

Progressive Monte-Carlo method with parameter space limited by best found so far (task BP)

Figure 4: Example of optpar.txt file for a progressive Monte-Carlo sampling

For this progressive Monte-Carlo method, the Monte-Carlo simulation is done in stages with a reduced parameter space in between the stages. The parameter space is reduced based on the best parameters found so far: for each stage, the parameter space is reduced to the smallest space that still contains all the best points, and this space is sampled in the next stage.

The method requires the specification of a few settings and numerical parameters. Suppose we want to perform a progressive Monte-Carlo sampling of the criteria surface depending on the three model parameters rivvel, pcelevadd and pcelevth. We require 100 model simulations per stage and 10 stages where the 15 best runs are retained to form the parameter space for the next stage (for a total numerical work of 100 x 10 = 1000 model runs).
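A minimal sketch of the stage-wise reduction (not HYPE code; the bounds and the objective function are made-up stand-ins for a HYPE run and its criterion, and the bookkeeping of the best points is simplified):

  import random

  def objective(x):                 # placeholder for a HYPE run + criterion evaluation
      return sum(v * v for v in x)

  # Illustrative bounds for rivvel, pcelevadd and pcelevth (made-up numbers)
  lo = [0.5, 0.0, 0.0]
  hi = [2.0, 0.5, 500.0]
  runs_per_stage, n_stages, n_best = 100, 10, 15

  for stage in range(n_stages):
      samples = [[random.uniform(l, h) for l, h in zip(lo, hi)]
                 for _ in range(runs_per_stage)]
      samples.sort(key=objective)
      best = samples[:n_best]
      # Reduce the parameter space to the smallest box containing all the best points
      lo = [min(p[i] for p in best) for i in range(len(lo))]
      hi = [max(p[i] for p in best) for i in range(len(hi))]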

The specification of calibration parameters starts on line 22. List the model parameters in no particular order, but with three rows for each parameter. The precisions specified in the parameter listing are not used for this task, but they are required for HYPE to interpret the optpar.txt file correctly.

Progressive Monte-Carlo method with parameter space reduced in stages (task SM)

Figure 5: Example of optpar.txt file for a progressive Monte-Carlo sampling in stages

For this progressive Monte-Carlo method, the Monte-Carlo simulation is done in stages with a reduced parameter space in between the stages. The parameter space is reduced based on the best parameters found so far: for each stage, the parameter space is reduced by a percentage compared to the size of the previous stage, and a reduced space is centred around each of the best points. The reduced parameter spaces defined in this way are sampled in the next stage.

The progressive Monte-Carlo method requires the specification of a few settings and numerical parameters. Suppose we want to perform a progressive Monte-Carlo sampling for the three model parameters rivvel, pcelevadd and pcelevth. We require 100 model runs per stage and per centre, and 10 stages where the 15 best runs are retained as centres for the next stage (for a total numerical work of 100 x 10 x 15 = 15000 model runs), and we want to shrink the parameter space radius by 20% at each stage.
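A minimal sketch of the idea (not HYPE code; the bounds and the objective function are made-up stand-ins for a HYPE run, the first stage is simplified to a single centre, and clipping to the original bounds is omitted):

  import random

  def objective(x):                 # placeholder for a HYPE run + criterion evaluation
      return sum(v * v for v in x)

  # Illustrative bounds for rivvel, pcelevadd and pcelevth (made-up numbers)
  lo = [0.5, 0.0, 0.0]
  hi = [2.0, 0.5, 500.0]
  runs_per_centre, n_stages, n_best, shrink = 100, 10, 15, 0.2

  radius = [(h - l) / 2.0 for l, h in zip(lo, hi)]
  centres = [[(l + h) / 2.0 for l, h in zip(lo, hi)]]   # start from the full space
  for stage in range(n_stages):
      samples = []
      for c in centres:
          samples += [[random.uniform(ci - r, ci + r) for ci, r in zip(c, radius)]
                      for _ in range(runs_per_centre)]
      samples.sort(key=objective)
      centres = samples[:n_best]                        # the best points become new centres
      radius = [r * (1.0 - shrink) for r in radius]     # shrink each radius by 20 %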

The specification of calibration parameters starts on line 22. List the model parameters in no particular order, but with three rows for each parameter. The precisions specified in the parameter listing are not used by the method, but they are required for HYPE to interpret the optpar.txt file correctly.

2-parameter organized scan (task SC)

Figure 6: Example of optpar.txt file for a 2-parameter organized scan

The 2-parameter organized scan does not require many settings; only 3 method-dependent flags and parameters have to be specified. We illustrate here how to perform a regularly-spaced sampling of the criteria surface as dependent on the model parameters pcadd and pcelev. Assume that we require 101 sampling points of the model parameter pcadd, which is allowed to vary within the interval [0, 1], and 20 sampling points of the model parameter pcelev, which is given the bounds [0, 190]. The total numerical work thus includes 101 x 20 = 2020 model runs.
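The scan is simply a regular grid over the two parameters, as in this sketch (not HYPE code; the objective function is a made-up stand-in for a HYPE run and its criterion):

  def objective(pcadd, pcelev):     # placeholder for a HYPE run + criterion evaluation
      return (pcadd - 0.3) ** 2 + (pcelev - 50.0) ** 2

  nx, ny = 101, 20                  # 101 x 20 = 2020 model runs in total
  pcadd_values = [i * 1.0 / (nx - 1) for i in range(nx)]       # regularly spaced in [0, 1]
  pcelev_values = [j * 190.0 / (ny - 1) for j in range(ny)]    # regularly spaced in [0, 190]

  results = [(objective(a, e), a, e) for a in pcadd_values for e in pcelev_values]
  print(min(results))               # best criterion value and the corresponding point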

The specification of calibration parameters starts on line 22. Needless to say, the order of the model parameters in the parameter listing must be consistent with the order of the numerical parameters scan_numx and scan_numy: specify the bounds for pcadd first, then for pcelev. The precisions specified in the parameter listing are not used for the sampling task, but they are required for HYPE to interpret the optpar.txt file correctly.

Differential Evolution Markov Chain method (task DE)

Figure 7: Example of optpar.txt file for a Differential Evolution Markov Chain simulation

The Differential Evolution Markov Chain (DEMC) method introduces an uncertainty estimate into the Differential Evolution (DE) optimization algorithm by applying the Metropolis-Hastings acceptance criterion and by adding a random number in the DE proposal generation. Another way of describing DEMC is as a simplified version of MCMC, where the DE proposal generation is used instead of the otherwise cumbersome MCMC jumps based on covariance and multivariate normal distributions. The genetic DE algorithm overcomes the difficult jump generation of MCMC by generating the proposal directly from the current population. The advantage of DEMC over plain DE is both the possibility to get a probability-based uncertainty estimate and a better convergence towards the global optimum. A description of DEMC can be found in ter Braak (2006) and in Rönkkönen et al. (2009).
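A schematic sketch of the proposal generation and acceptance is given below. This is not HYPE's implementation nor the exact acceptance probability of ter Braak (2006): the objective function, the population size, and the gamma, noise and temp values are illustrative placeholders, chosen only to show the DE proposal plus perturbation and a Metropolis-type accept/reject step.

  import math, random

  def objective(x):                          # placeholder: a criterion where lower is better
      return sum(v * v for v in x)

  n_pop, n_gen, n_par = 10, 200, 3           # population size, generations, number of parameters
  gamma, noise, temp = 0.8, 0.01, 0.1        # jump factor, noise scale and acceptance "temperature"

  pop = [[random.uniform(-1.0, 1.0) for _ in range(n_par)] for _ in range(n_pop)]
  crit = [objective(x) for x in pop]

  for _ in range(n_gen):
      for i in range(n_pop):
          a, b = random.sample([j for j in range(n_pop) if j != i], 2)
          # DE proposal from two other members, plus a small random perturbation (the MC part)
          prop = [pop[i][k] + gamma * (pop[a][k] - pop[b][k]) + random.gauss(0.0, noise)
                  for k in range(n_par)]
          c_new = objective(prop)
          # Metropolis-type rule: always accept an improvement, and sometimes accept
          # a worse proposal with a probability that decreases with the deterioration
          if c_new < crit[i] or random.random() < math.exp((crit[i] - c_new) / temp):
              pop[i], crit[i] = prop, c_new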

The HYPE Differential Evolution Markov Chain (DEMC) optimization method has six numerical parameters that can be set.

The specification of calibration parameters starts on line 22 in optpar.txt. The model parameters subject to optimization are listed as described above, in no particular order, but with three rows for each parameter. The precisions specified in the parameter listing are used by the DEMC method to scale the random perturbation in the generation of a new proposed parameter set.

Brent method (task BN)

Figure 8: Example of optpar.txt file for the Brent method

The Brent method optimizes the parameters iteratively. One iteration is composed of several steps. First the method optimizes one parameter at a time; each parameter is optimized with a line search algorithm. After all parameters have been optimized separately, an optional step may be taken that optimizes the parameter set along the line between the best point before the parameter optimizations and the new best point, i.e. a “diagonal” step. This optimization is also done with the line search routine. Then the Brent method starts another iteration of optimizing the parameters one after the other. This continues until one of several interruption criteria is fulfilled.
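The structure of one iteration can be sketched as follows. This is not HYPE's routine: the objective function is a made-up stand-in for a HYPE run, the crude sampling line search stands in for the actual line search routine (which uses parabolic interpolation), and the interruption criteria are reduced to a fixed iteration count.

  def objective(x):                         # placeholder for a HYPE run + criterion evaluation
      return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

  def line_search(x, direction, f, lo=-1.0, hi=1.0, n=51):
      # Crude stand-in for the line search routine: try a set of step lengths and
      # keep the best point (step length 0, i.e. staying put, is included).
      steps = [lo + k * (hi - lo) / (n - 1) for k in range(n)]
      return min(([xi + s * di for xi, di in zip(x, direction)] for s in steps), key=f)

  x = [0.0, 0.0]                            # starting values (qNstartpar.txt in HYPE)
  for iteration in range(10):               # interruption criteria simplified to a fixed count
      x_before = list(x)
      for i in range(len(x)):               # optimize one parameter at a time
          direction = [1.0 if j == i else 0.0 for j in range(len(x))]
          x = line_search(x, direction, objective)
      diagonal = [b - a for a, b in zip(x_before, x)]
      if any(d != 0.0 for d in diagonal):   # optional "diagonal" step along the improvement
          x = line_search(x, diagonal, objective)
  print(x)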

The Brent method requires the specification of a few settings: the different method interrupters, the flag for using the diagonal step, and the numerical parameters for the line search routine. In addition, a file with starting values of the parameters is required. For a given set of model parameters, optimization of the model setup by the Brent method is achieved by specifying the following:

The specification of calibration parameters starts on line 22 of the optpar.txt file. The model parameters subject to optimization are listed as described above. The order in which the model parameters are listed in optpar.txt may be relevant for the result, since they will be optimised one by one in this order. In addition, they must be listed in the same order in the qNstartpar.txt file. This file gives the starting parameter values for the line search in the Brent routine.

Figure 9: Example of qNstartpar.txt file for the Brent method, corresponding to the optpar.txt example in Fig 8.

Quasi-Newton methods (task Q1 and Q2)

Figure 10: Example of optpar.txt file for the quasi-Newton method

The quasi-Newton methods optimise all parameters at the same time. The direction of the search is determined from the gradient of the criteria surface at the point of the current best parameters. The parameter set is optimized with the line search routine along the line given by this direction. The search direction can be calculated in three different ways in HYPE: by the two quasi-Newton methods described in this section, and by the steepest descent method described in the next section. The optimization continues until one of several interruption criteria is fulfilled.

Calculating the search direction for the quasi-Newton methods involves updating an approximation of the inverse Hessian matrix. This can be done by two methods, both described in Nocedal and Wright (2006): task Q1 uses the DFP (Davidon-Fletcher-Powell) update and task Q2 uses the BFGS (Broyden-Fletcher-Goldfarb-Shanno) update.
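The overall iteration can be sketched as below, here with a BFGS-type update (the DFP variant differs only in the update formula). This is a generic textbook sketch, not HYPE's implementation: the objective function, the step range of the crude stand-in line search, the finite-difference step and the fixed iteration count are all illustrative assumptions.

  def objective(x):                          # placeholder for a HYPE run + criterion evaluation
      return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

  def gradient(f, x, h=1e-4):                # numerical derivative by forward differences
      f0 = f(x)
      return [(f([xj + (h if j == i else 0.0) for j, xj in enumerate(x)]) - f0) / h
              for i in range(len(x))]

  def line_search(x, d, f, n=51):            # crude stand-in for the line search routine
      steps = [k * 1.0 / (n - 1) for k in range(n)]        # step lengths in [0, 1]
      return min(([xi + s * di for xi, di in zip(x, d)] for s in steps), key=f)

  def bfgs_update(H, s, y):                  # BFGS update of the inverse Hessian approximation
      n = len(s)
      rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
      I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
      A = [[I[i][j] - rho * s[i] * y[j] for j in range(n)] for i in range(n)]
      B = [[I[i][j] - rho * y[i] * s[j] for j in range(n)] for i in range(n)]
      AH = [[sum(A[i][k] * H[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
      AHB = [[sum(AH[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
      return [[AHB[i][j] + rho * s[i] * s[j] for j in range(n)] for i in range(n)]

  x = [0.0, 0.0]                             # starting values (qNstartpar.txt in HYPE)
  H = [[1.0, 0.0], [0.0, 1.0]]               # initial inverse Hessian approximation
  g = gradient(objective, x)
  for iteration in range(20):                # interruption criteria simplified to a fixed count
      d = [-sum(H[i][j] * g[j] for j in range(len(x))) for i in range(len(x))]
      x_new = line_search(x, d, objective)
      g_new = gradient(objective, x_new)
      s = [b - a for a, b in zip(x, x_new)]
      y = [b - a for a, b in zip(g, g_new)]
      if sum(yi * si for yi, si in zip(y, s)) > 1e-12:     # skip update for tiny curvature
          H = bfgs_update(H, s, y)
      x, g = x_new, g_new
  print(x)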

The quasi-Newton methods require the specification of a few settings: the different method interrupters, numerical parameters for the calculation of the numerical derivative, and numerical parameters for the line search routine. In addition, a file with starting values of the parameter set is required. For a given set of model parameters, optimization of the model setup by a quasi-Newton method is achieved by specifying the following:

The specification of calibration parameters starts on line 22 of the optpar.txt file. The model parameters subject to optimization are listed as described above. The order in which the model parameters are listed in optpar.txt is relevant for the qNstartpar.txt file: they must be listed in the same order. This file gives the starting parameter values for the line search in the quasi-Newton methods.

Figure 11: Example of qNstartpar.txt file for the quasi-Newton methods, corresponding to the optpar.txt example in Fig 10

Steepest descent method (task SD)

Figure 12: Example of optpar.txt file for the steepest descent method

The steepest descent method is a special case of the quasi-Newton methods. Like the other quasi-Newton methods, it iteratively optimises all parameters together. The parameter set is optimized with the line search routine, and the direction of the search is determined by the gradient of the criteria surface. The optimization continues until one of several interruption criteria is fulfilled.

Calculating the search direction for the quasi-Newton methods involves updating an approximation of the inverse Hessian matrix. This step is skipped in the steepest descent method, which approximates the inverse Hessian by the unit matrix; the search direction is then simply the negative gradient.
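In sketch form (again not HYPE code; the objective function, the crude stand-in line search and the fixed iteration count are illustrative assumptions), the iteration reduces to a line search along the negative gradient:

  def objective(x):                          # placeholder for a HYPE run + criterion evaluation
      return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

  def gradient(f, x, h=1e-4):                # numerical derivative, as for the quasi-Newton methods
      f0 = f(x)
      return [(f([xj + (h if j == i else 0.0) for j, xj in enumerate(x)]) - f0) / h
              for i in range(len(x))]

  def line_search(x, d, f, n=51):            # crude stand-in for the line search routine
      steps = [k * 1.0 / (n - 1) for k in range(n)]
      return min(([xi + s * di for xi, di in zip(x, d)] for s in steps), key=f)

  x = [0.0, 0.0]                             # starting values (qNstartpar.txt in HYPE)
  for iteration in range(20):                # interruption criteria simplified to a fixed count
      d = [-gi for gi in gradient(objective, x)]     # unit inverse Hessian: direction = minus gradient
      x = line_search(x, d, objective)
  print(x)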

The steepest descent method requires the specification of a few settings: the different method interrupters, numerical parameters for the calculation of the numerical derivative, and the numerical parameters for the line search routine. In addition, a file with starting values of the parameter set is required. For a given set of model parameters, optimization of the model setup by this method is achieved by specifying the following:

The specification of calibration parameters starts on line 22 of the optpar.txt file. The model parameters subject to optimization are listed as described above. The order in which the model parameters are listed in optpar.txt is relevant for the qNstartpar.txt file: they must be listed in the same order. This file gives the starting parameter values for the line search of the steepest descent method.

Figure 13: Example of qNstartpar.txt file for the steepest descent method, corresponding to the optpar.txt example in Fig 12

References

ter Braak, Cajo J. F. (2006). A Markov Chain Monte Carlo version of the genetic algorithm Differential Evolution: easy Bayesian computing for real parameter spaces. Stat Comput, 16:239-249, DOI 10.1007/s11222-006-8769-1

Nocedal, J., and S.J. Wright (2006). Numerical Optimization (second edition), Springer Series in Operations Research, Springer 2006, Chapter 6.1 (p. 135-143)

Rönkkönen J., Li, X., and V. Kyrki (2009). The role of local and global search in solving problems with multiple global optima. Technical Report 110, Department of Information Technology, Lappeenranta University of Technology. ISBN 978-952-214-730-1