
Barrie & Hibbert’s Response to CEIOPS-CP 56: Draft L2 Advice on Tests and Standards for Internal Model Approval


Expert judgement may be subject to biases or other shortcomings. These limitations must be acknowledged and solutions implemented to reduce their detrimental effects, taking into account the materiality and significance of the expert judgement used. The requirements of Article 119(2) also apply to expert judgement (cf. Section ...) where suitable. In addition, expert judgement is only admissible if it was derived using a scientific method and meets the following three requirements:

  • Empirical testing: Expert judgement must be falsifiable, refutable and testable.

  • Validation and documentation: Expert judgement must be validated and documented (cf. Chapters 8 and 9).

  • Error rate: Expert judgement must have a known or potential error rate, and standards concerning the operation of its methodology must exist and be maintained.

We recognise that the definition, management and control of expert judgment is a particularly problematic area for CEIOPS. So far as we are aware, there is no consensus on expert judgment methodology. We would describe the exercise of expert judgment as a situation where an individual or expert group makes use of a number of sources of information, including (but not limited to) quantitative/statistical models, mental models, heuristics, past experience and results in similar fields of analysis, and then weights that information to form a subjective overall opinion. For certain questions, different experts may come to quite different conclusions; that is, there is genuine uncertainty about the 'true' model and parameters. As such, the requirement that expert judgment must be 'derived using a scientific method' seems to suggest that, in reality, much expert judgment as we would define it cannot be used at all. Further, asking the expert to codify every decision could be viewed as much like asking a tennis player to write down the rules they had used to calculate the position and speed of a moving tennis ball. Judgment, by its nature, will not be easy to describe.

We (a model and assumption provider) seek to codify as much of our modelling and model calibration practice as possible. Nevertheless, a number of judgments made by our analysts are material to results but, we believe, fall within a range of reasonable answers to difficult questions. This uncertainty may be reduced by careful analysis, but it cannot be eliminated, given that finance is not a hard science: models of social systems are exposed to a greater level of model risk. It would be helpful if the 'soft' nature of expert judgment were recognised in some way in the Level 2 text.

As an aside, this genuine uncertainty surrounding key assumptions does create a real dilemma: does a regulator allow different experts to reach different (reasonable) conclusions, or impose consistency for the sake of comparability across firms?

The requirement that expert judgment 'must have a known or potential error rate' cannot always be complied with. Consider the data used in an ESG supporting an internal model and calibrated to a 1-in-200 year market shock. The error rate in the expert judgment involved in this calibration cannot be known: there are not enough credible and reliable years of data to allow a calibration to be produced without expert judgment, and the same limitation in the available data that requires the expert judgment prevents the error rate in that judgment from being calculated (other than many years into the future). In reality, if the error rate were known, we probably would not need to exercise expert judgment at all. This looks to us very much like the classic 'Catch-22'.
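To illustrate the point numerically, the short Python sketch below is our own illustration, not part of the original response: the normal return model, its parameters and the 50-year history length are assumptions chosen purely for exposition. It simulates many alternative 50-year return histories from a known model and re-estimates the 0.5th percentile (the 1-in-200 point) from each. The scatter of those estimates around the true value is precisely the 'error rate' that, with only one real history available, can never be observed.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    MU, SIGMA = 0.06, 0.18   # hypothetical 'true' annual return model (illustrative only)
    N_YEARS = 50             # length of the one credible history actually available
    N_TRIALS = 10_000        # alternative histories no analyst ever observes

    # The 'true' 1-in-200 annual return under the assumed model.
    true_q = norm.ppf(0.005, loc=MU, scale=SIGMA)

    # Re-estimate the 0.5th percentile from each simulated 50-year history.
    histories = rng.normal(MU, SIGMA, size=(N_TRIALS, N_YEARS))
    estimates = np.percentile(histories, 0.5, axis=1)

    print(f"true 1-in-200 return under the model: {true_q:+.3f}")
    print(f"90% of 50-year estimates fall in:     "
          f"[{np.percentile(estimates, 5):+.3f}, {np.percentile(estimates, 95):+.3f}]")

With only 50 observations, the estimated tail point is driven almost entirely by the worst one or two data points in the sample, so equally careful analysts working from different (equally plausible) histories would report materially different 1-in-200 shocks.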

