Model Governance

Introduction

Risk models have been around for a long time, often accompanied by some kind of process to manage their lifecycles. Partly in response to the credit crisis of 2008, the discipline of Model Governance has become more regulated and formalized. In addition to requesting model results, regulatory authorities now want a core understanding of the models themselves, their validation processes and the underlying data quality. A case in point is the ECB's current TRIM initiative, its largest single supervision operation to date.

As a consequence, and as reflected by the TRIM objectives, increased scrutiny is expected in the areas of Model Governance and Model Risk Management. Model inventories, auditable methodology/data validation processes and detailed guidelines must be in place as a matter of course. Institutions must also be able to answer a host of questions:

  • Can you fully reproduce the historical model runs used for reported capital requirements?
  • Is the underlying data still available, and can its integrity tests be audited?
  • Where are overrides used? Which ones, and why?
  • Can model evolution over time be audited?
  • What is the root cause of variations in results: data or model code?
  • Who is using which versions of the models, how often and for what purposes?

We can help. MonkeyProof modelSafe adds critical Model Governance capabilities to your environment by hosting your model portfolio. Check our solutions.


ECB TRIM

The ECB launched TRIM to identify the causes of variations in calculated capital requirements among banks, stemming from internal models. The objective is to reduce this variability and to restore confidence in internal models. Expect the TRIM impact phase to involve stricter Model Governance and Model Risk Management awareness. Are you prepared?



MonkeyProof modelSafe

What about covering critical Model Governance capabilities from within a single environment? One reference, one user interface. Your model runs organized in projects and jobs. Model types, versions and evolution available in a single overview. Integrated audit replication runs and data quality testing. Not imposing models, but hosting yours. That's what we aim for with MonkeyProof modelSafe.


Overview


In modelSafe, you create, run and group related model analyses (jobs) in projects, managed under version control. You can wrap runs for particular asset classes in separate projects. Included jobs may be of any model type or version. The precondition? That your models are hosted in modelSafe.

The Project Summary offers an overview of included jobs: when, what and how. It is available at any point in time, for audit replication or other reuse purposes. A modelSafe job consists of a job file and its related binary files, containing input and results data, all stored together safely.
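The project/job grouping described above can be pictured with a minimal sketch. All class and field names below are illustrative assumptions for this page, not modelSafe's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are assumptions, not modelSafe's API.
@dataclass
class Job:
    name: str
    model_type: str
    model_version: str
    job_file: str                                 # run configuration file
    binaries: list = field(default_factory=list)  # input and results data

@dataclass
class Project:
    name: str                                     # e.g. one project per asset class
    jobs: list = field(default_factory=list)

    def summary(self):
        """When, what and how: one line per included job."""
        return [f"{j.name}: {j.model_type} v{j.model_version}" for j in self.jobs]

retail = Project("retail-mortgages")
retail.jobs.append(Job("2023Q4-run", "PD", "2.1", "pd_2023q4.job",
                       ["inputs.bin", "results.bin"]))
print(retail.summary())  # ['2023Q4-run: PD v2.1']
```

The point of the sketch is the grouping itself: a job bundles its configuration and binary data, and a project bundles related jobs so a summary can be produced at any time.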

Code evolution of included models is easily monitored through integrated version control capabilities. In addition to commit logs, modified lines of code are instantly highlighted. If required, an inventory management module can be included in modelSafe.
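Highlighting modified lines between two model versions is standard version-control territory; independent of modelSafe's own integration, the idea can be shown with Python's built-in difflib (the model code snippets are made up for illustration):

```python
import difflib

# Two hypothetical versions of a model's code, line by line.
old = ["pd = base_rate * scalar", "lgd = 0.45"]
new = ["pd = base_rate * scalar * macro_adj", "lgd = 0.45"]

# unified_diff marks removed lines with '-' and added lines with '+'.
diff = list(difflib.unified_diff(old, new, fromfile="model_v1",
                                 tofile="model_v2", lineterm=""))
print("\n".join(diff))
```

A governance tool renders the same information graphically, but the underlying record is the same: which lines changed between which versions.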

Result Metrics


The Result Metrics section lets you compare the results of related runs. This quickly offers insight into capital variations and their root causes, whether those lie in job configuration settings, code modifications or evolved data.
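The root-cause reasoning above can be sketched in a few lines: given two runs, compare their configuration, code revision and data fingerprint to see which dimension explains a capital variation. The field names and run records are assumptions for illustration, not modelSafe output:

```python
# Two hypothetical run records; field names are illustrative assumptions.
run_a = {"config": {"scenario": "baseline"}, "code_rev": "a1f3",
         "data_hash": "77be", "capital": 105.0}
run_b = {"config": {"scenario": "baseline"}, "code_rev": "b2c9",
         "data_hash": "77be", "capital": 112.5}

def root_cause(a, b):
    """Flag which dimension(s) differ between two related runs."""
    causes = []
    if a["config"] != b["config"]:
        causes.append("job configuration")
    if a["code_rev"] != b["code_rev"]:
        causes.append("code modification")
    if a["data_hash"] != b["data_hash"]:
        causes.append("evolved data")
    return b["capital"] - a["capital"], causes

delta, causes = root_cause(run_a, run_b)
print(f"capital delta {delta:+.1f}, candidate causes: {causes}")
# capital delta +7.5, candidate causes: ['code modification']
```

Here the data and configuration are identical, so the variation is attributable to the code change between revisions.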

In addition to graphical depictions, results are available numerically and can be exported to any required format. Optionally, automated reports are available to share the current status internally or with authorities.


Job Scope


The job scope is no less important than the project scope. Within individual jobs, users specify not only configuration settings for a run, but also scenario types and overrides (including justification). The option to clone jobs enables sensitivity analyses; if required, jobs can be imported from or exported to other projects.

In addition, users and authorities alike can invoke validity tests on the underlying calculation data, as defined for that internal model version.
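Per-model-version validity tests of this kind are essentially a named rule set applied to the calculation data. A minimal sketch, in which the rules, version key and record fields are all assumptions for illustration:

```python
# Hypothetical integrity rules, keyed by model version.
RULES = {
    "pd_v2.1": [
        ("no missing exposure",
         lambda rows: all(r["exposure"] is not None for r in rows)),
        ("pd in [0, 1]",
         lambda rows: all(0.0 <= r["pd"] <= 1.0 for r in rows)),
    ],
}

def validate(model_version, rows):
    """Run every rule registered for this model version; return name -> pass/fail."""
    return {name: check(rows) for name, check in RULES[model_version]}

rows = [{"exposure": 1_000_000, "pd": 0.02},
        {"exposure": 250_000, "pd": 0.15}]
report = validate("pd_v2.1", rows)
print(report)  # {'no missing exposure': True, 'pd in [0, 1]': True}
```

Because the rule set is tied to the model version, the same tests that ran at reporting time can be re-invoked later for audit purposes.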

Usage Metrics


Need to prove model usage in greater detail? The Usage Metrics show which models and versions have been used over a predefined period. Optionally, usage metrics can be propagated to an inventory module to log usage over the model lifecycle.
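Conceptually, usage metrics boil down to counting runs per model and version within a time window. A minimal sketch with a made-up log format (not modelSafe's storage format):

```python
from collections import Counter
from datetime import date

# Hypothetical usage log: one entry per model run.
log = [
    {"model": "PD", "version": "2.1", "when": date(2023, 10, 3)},
    {"model": "PD", "version": "2.1", "when": date(2023, 11, 8)},
    {"model": "LGD", "version": "1.4", "when": date(2023, 12, 1)},
]

def usage(log, start, end):
    """Count runs per (model, version) within [start, end]."""
    window = [e for e in log if start <= e["when"] <= end]
    return Counter((e["model"], e["version"]) for e in window)

counts = usage(log, date(2023, 10, 1), date(2023, 12, 31))
print(counts)
```

The same tallies, kept per lifecycle stage, are what an inventory module would accumulate over a model's lifetime.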

In short, modelSafe covers your model governance from within a single environment, backstopped by integrated version control.




Key Features
  • Standalone application
  • Integrated Subversion or Git version control
  • Project and Jobs structure for model run capturing
  • Job cloning and audit runs available
  • Track & trace of code and data evolution
  • Integrated Result Metrics for benchmarking purposes
  • Integrated Usage Metrics for monitoring purposes
  • Data integrity testing capabilities optional
  • Flexible Application Programming Interface (API), for hosting models
  • Automated report generation optional
  • Flexible data exporting options available
  • Model inventory module optional
  • Database Connectivity solutions available
  • Implementation and services available


Interested?

Contact us at MonkeyProof Solutions; we look forward to getting to know you and discussing your challenges.