 There are in total **//nine methods of optimization//** to choose from in HYPE. The sampling methods are a basic Monte-Carlo simulation with random parameter values chosen within a user-specified parameter interval, and two progressive Monte-Carlo simulations where the Monte-Carlo simulations are made in stages with a reduced parameter space in between the stages. In addition it is possible to run an organized sampling of two parameters. The Differential Evolution Markov Chain method combines a genetic optimization algorithm with random sampling. The directional methods are the Brent method, two versions of quasi-Newton methods with different ways to calculate the gradient, and the method of steepest descent.
  
Given enough sampling points, the simple **//sampling method//** can give an estimate of the optimum. An advantage of the sampling methods is that the number of function evaluations, and thus the computation time, is determined by the user. The sampling methods are useful to provide a starting point for the directional optimization methods.
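
As a rough illustration of the sampling idea (not HYPE source code), the sketch below draws parameter sets uniformly within user-specified bounds, evaluates a performance criterion for each, and keeps the best set as a possible starting point for a directional method. The function and variable names are hypothetical, and ''evaluate'' stands in for a full model run.

<code python>
# Illustration of plain Monte-Carlo sampling (not the HYPE implementation).
import random

def monte_carlo_sample(bounds, evaluate, n_samples=1000):
    """bounds: {parameter name: (lower, upper)}; evaluate: runs the model and
    returns a criterion value to be minimised (stands in for a full HYPE run)."""
    best_params, best_crit = None, float("inf")
    for _ in range(n_samples):
        params = {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        crit = evaluate(params)
        if crit < best_crit:
            best_params, best_crit = params, crit
    return best_params, best_crit
</code>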
  
 The **//Differential Evolution Markov Chain//** (DEMC) provides an uncertainty estimate of the optimum. The genetic algorithm (i.e. DE) works by proposing new members (parameter values) and then accepting or rejecting them. In addition to the random element of the creation of a proposal (by inheriting traits from other members and keeping some traits unchanged), in the DEMC method a random number is added to the proposed parameters and the proposal may be accepted with a certain probability even if the objective criterion is worse than for the replaced member. The advantage of DEMC versus plain DE is both the possibility to get a probability-based uncertainty estimate of the global optimum and a better convergence towards it.
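
The sketch below illustrates one DEMC-style proposal and acceptance step. It is not the HYPE code; the names and the scaling factor ''gamma'' are hypothetical, and the acceptance rule is only a Metropolis-like stand-in for the probabilistic acceptance described above.

<code python>
# Illustration of a DEMC-style proposal/acceptance step (not the HYPE code).
import math
import random

def demc_step(population, i, evaluate, gamma=0.8, sigma=0.1):
    x = population[i]                                  # member that may be replaced
    a, b = random.sample([m for j, m in enumerate(population) if j != i], 2)
    proposal = [xj + gamma * (aj - bj) + sigma * random.gauss(0.0, 1.0)
                for xj, aj, bj in zip(x, a, b)]        # inherit traits + random term
    old, new = evaluate(x), evaluate(proposal)
    # Accept if better, or with a probability that shrinks the worse it is
    # (Metropolis-like rule; the exact rule in HYPE may differ).
    if new <= old or random.random() < math.exp(-(new - old)):
        return proposal
    return x
</code>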
  
The **//directional methods//** progress iteratively from one set of model parameters to a new set that has a better objective criterion. This is achieved by determining a direction of improvement, and then the optimal step length in that direction. The directional methods assume that a minimum exists within the parameter space. The determination of the direction is what separates the different optimization methods. It is given by one parameter and the direction between the last two best parameter sets (for the Brent method), or by a function of the gradient of the objective function. The methods using the gradient are more powerful, but require more evaluations. The directional methods depend on a starting point for their iterations. The choice of the starting point is important for the performance of the methods. It influences the calculation time and possibly which (local) optimum is reached.
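
A generic sketch of such an iteration is given below, assuming a criterion to be minimised. The helper functions ''choose_direction'' and ''line_search'' are placeholders for whatever rule the chosen method uses; the direction rule is what distinguishes the Brent, quasi-Newton and steepest descent methods.

<code python>
# Generic directional search: pick a direction, then an optimal step length.
def directional_search(x0, evaluate, choose_direction, line_search,
                       max_iter=100, tol=1e-6):
    x, crit = x0, evaluate(x0)
    for _ in range(max_iter):
        d = choose_direction(x)              # e.g. from the last two best points, or a gradient
        x_new, crit_new = line_search(x, d)  # best step length along direction d
        if crit - crit_new < tol:            # stop when the improvement becomes negligible
            break
        x, crit = x_new, crit_new
    return x, crit
</code>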
  
 The automatic calibration algorithm is controlled by means of two or three **//files//**: [[start:HYPE_file_reference:info.txt|info.txt]] and [[start:HYPE_file_reference:optpar.txt|optpar.txt]], and for some methods [[start:HYPE_file_reference:qnstartpar.txt|qNstartpar.txt]]. The following sections present and discuss the entries and numerical parameters of those files, necessary and/or optional to use the automatic calibration.
  * The derivation of a new parameter set to be tested (the mutation) is also governed by the code ''DEMC_sigma''. "Sigma" and the parameter precision determine how much a random perturbation will influence the new parameter set. "Sigma" is the base of the perturbation. The value is the standard deviation of the sample error. Set it to 0 if you don't want to use it. Default is 0.1. The sigma value is multiplied with the 3rd-row value for each parameter (the precision) to determine the size of the random perturbation.
  * The probability to use the new parameter set is governed by the code ''DEMC_crossover'', which is the probability to use the mutated parameter values. Use 1 to always test the mutation (default), or < 1 to cross over some parameter values from the parent generation (recommended if you have a large number of parameters). In the example of the [[start:hype_file_reference:optpar.txt|optpar.txt]] file shown in Fig 7, the 9th line indicates that the user wants a 40% probability to keep the previous/parent parameter set and not to use the mutation.
  * By default, only new parameter sets with a better optimization criterion are accepted for the next generation. The code ''DEMC_accprob'' is used to switch on the possibility to also accept less good parameter sets. If used, a proposed parameter set is accepted with a probability that increases the better its performance is compared to the best parameter set so far. Set it to 0 to only accept proposals with better performance than the parent generation (default), or set it to 1 to turn on the possibility to accept worse proposals. In the example of the [[start:hype_file_reference:optpar.txt|optpar.txt]] file shown in Fig 7, the 10th line indicates that the user wants new generations to be better. A minimal sketch of how these three codes act on a proposal is given after this list.
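
The sketch below is a simplified illustration (not the HYPE code) of how the three codes could act on a single proposal: ''DEMC_sigma'' scales a random perturbation by each parameter's precision, ''DEMC_crossover'' is the probability of keeping a mutated value rather than the parent value, and ''DEMC_accprob'' controls whether a worse proposal may still be accepted. All function names are hypothetical.

<code python>
# Simplified illustration of DEMC_sigma, DEMC_crossover and DEMC_accprob.
import random

def build_proposal(parent, mutated, precision, sigma=0.1, crossover=0.6):
    proposal = []
    for p, m, prec in zip(parent, mutated, precision):
        m = m + sigma * prec * random.gauss(0.0, 1.0)             # perturbation = sigma * precision
        proposal.append(m if random.random() < crossover else p)  # crossover vs parent value
    return proposal

def accept(new_crit, parent_crit, accprob=0.0):
    """Simplified: in HYPE the acceptance probability also depends on how close
    the proposal is to the best criterion found so far."""
    if new_crit < parent_crit:
        return True                           # always accept improvements
    return random.random() < accprob          # accprob = 0 rejects all worse proposals
</code>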
  
 The specification of calibration parameters starts on line 22 in [[start:hype_file_reference:optpar.txt|optpar.txt]]. Listing of the model parameters subject to optimization is achieved as described [[start:hype_tutorials:automatic_calibration#specification_of_calibration_parameters_-_optpartxt|above]]. The model parameters are listed in no particular order, but with three rows for each parameter. The precisions specified in the parameter listing part are used by the DEMC method to scale the random perturbation when generating a new proposed parameter set.
|Figure 10: Example of optpar.txt file for the quasi-Newton method|
  
The quasi-Newton methods optimise all parameters at the same time. The direction of the search is determined by the gradient of the criteria surface at the point of the current best parameters. The parameter set is optimized with the line search routine along the line determined by the gradient. The gradient can be estimated in three different ways in HYPE: the two quasi-Newton methods described in this section and the one called steepest descent in the next section. The optimization continues until one of several interruption criteria is fulfilled.
  
 Calculating the gradient for the quasi-Newton method involves updating the inverse Hessian matrix. This can be done by two methods, both described in Nocedal and Wright (2006). Task Q1 uses the DFP (Davidon-Fletcher-Powell) method and task Q2 uses the BFGS (Broyden-Fletcher-Goldfarb-Shanno) method.
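
For reference, the two standard inverse-Hessian updates from Nocedal and Wright (2006) are sketched below with NumPy; ''s'' is the step between successive parameter sets and ''y'' the corresponding change in gradient. This is only the textbook formulation, not an excerpt from the HYPE source code.

<code python>
# Textbook DFP and BFGS updates of the inverse Hessian approximation H.
import numpy as np

def dfp_update(H, s, y):
    """DFP (task Q1): H+ = H - (H y y^T H)/(y^T H y) + (s s^T)/(y^T s)."""
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (y @ s)

def bfgs_update(H, s, y):
    """BFGS (task Q2): H+ = (I - r s y^T) H (I - r y s^T) + r s s^T, r = 1/(y^T s)."""
    r = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - r * np.outer(s, y)) @ H @ (I - r * np.outer(y, s)) + r * np.outer(s, s)
</code>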