stephan@paxmann.biz


 

An Artificial Intelligence based Commercial Risk Assessment Matrix as a Selection Tool for Online Trust and Security Solutions in the Financial Industry

MT 2003, Havana, Cuba - April 2003

    Abstract. In practice, the selection of a specific Online Security and Trust solution is often based either on technological security requirements or on business-driven security requests. The prime focus of such a selection, however, must be its optimum risk mitigation factor, which combines multiple risk areas such as technological, legal and commercial factors. A combined Risk Assessment Matrix will significantly improve the selection of a commercially viable security and trust solution. Over time, however, the risk factors and risk assessments will change as more detailed knowledge and input from the risk officers becomes available. An Artificial Intelligence based Risk Assessment Matrix would therefore be able to improve its own applicability and focus, and would be a significant factor for the commercial usage of any Trust and Security Solution.

    Introduction

    Online Services provide a high degree of new service flexibility to Corporations and Financial Institutions. Physical boundaries no longer seem to exist. Service availability seems unlimited in terms of time and location. This flexibility, however, also exposes Corporations and the Financial Industry to new risk areas, which obviously need to be covered as well as possible. Specific legal requirements need to be met, as well as technical and operational demands. No company or bank can afford to launch a new Online Service which is not sufficiently secured, e.g. in terms of access control and transaction processing.

    For online access control, username and password solutions are in place but are increasingly being substituted by other methodologies and technologies such as tokens, hardware or software digital certificate solutions, or more advanced biometric systems. Independent of the state of the art of each solution, all of them follow the same objective: to prevent the misuse of an online service.

    But is the objective of prevention a single-track road to follow? At first view, the optimum security environment would be a technical decision, driven by the IT departments to mitigate the technical risks. A second view could consider legal impacts. Another viewpoint would concentrate on client-driven requests and the business applicability of the chosen solution, as this is highly important for a successful security solution. Hence, the selection of an optimum online security solution is driven by many risk areas.

    The number of risks to mitigate in order to achieve a commercially acceptable security solution can certainly be extended further and will change over time as more information on risks and their quantitative and qualitative assessment becomes known. A risk assessment methodology could therefore significantly improve the security solution selection process by providing

    1. a combined Risk Assessment Matrix for the different risk areas of a new online service and

    2. the possibility to intelligently improve the assessment criteria and risk correlations based on previous input.

    The static improvement of a Risk Assessment is a standard process in many Organisations. It is a time-consuming and labour-intensive process. By enhancing a static Risk Assessment with Artificial Intelligence tools, the adaptation process can not only be shortened but can also guarantee a state-of-the-art assessment process, individually designed around an Organisation's risk management policy and needs. Such a dynamic Risk Assessment Matrix will be discussed in the later chapters of this paper.
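    As a simple illustration of what a static combined matrix computes, the weighted-score idea can be sketched in a few lines of Python; all risk areas, weights and scores below are hypothetical and not part of the methodology described here:

```python
# Minimal sketch of a static combined Risk Assessment Matrix.
# All risk areas, weights, and scores are hypothetical illustrations.

# Policy-driven weights per risk area (normalised to sum to 1.0 here,
# so the combined score stays in the same [0, 1] range as the inputs).
POLICY_WEIGHTS = {"technical": 0.4, "legal": 0.35, "commercial": 0.25}

def combined_risk_score(assessments):
    """Combine per-area risk scores (0 = no risk, 1 = maximum risk)
    into a single policy-weighted score."""
    return sum(POLICY_WEIGHTS[area] * score
               for area, score in assessments.items())

# Example: assess a hypothetical token-based solution.
token_solution = {"technical": 0.2, "legal": 0.5, "commercial": 0.3}
print(combined_risk_score(token_solution))
```

    A static matrix of this kind is recomputed by hand whenever weights or criteria change; the dynamic approach discussed below would adapt these values from previous assessments.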

    Preliminary Requirements for Online Risk Management and Assessment – Top Down versus Bottom Up

    By law, Banks and Corporations are required to perform Risk Assessments for their Financial Risks (e.g. in Germany: KonTraG – Gesetz zur Kontrolle und Transparenz im Unternehmensbereich / Control and Transparency Law; internationally: e.g. the Basel I and Basel II Agreements for Financial and Credit Risk Control). These legal requirements for Risk Management need to be implemented in a Corporate / Financial Institution Policy in order to be binding and known to all Business Units in the Organisation.

    When defining the Risk Management Policy and Procedures, it is necessary to gain support from different areas of an Organisation to capture all their specific risks. For the specific applicability of Risk Management procedures (e.g. New System Development, Online Trust and Security Services), nevertheless, a strategic and general policy for online risk assessment needs to be in place first, which defines which risks are of substantial importance to the Corporation or Financial Institution. Based on the overall definition of generic risk classes, it is possible to determine specific risks and assessment criteria which are in line with the overall risk set-up. This determination process should be conducted in a number of iterative steps to achieve a complete risk portfolio at the end.

    This top-down approach might contradict the "functional expert" focused view of risk management, in which the specialised employee defines the specific risks for a product or service, as he/she knows in detail when they appear, how to mitigate them and which impact they have on a specific area. Then, by taking all "functional expert" definitions together, the overall risk classes can be specified.

    For specific risk determinations this process is entirely valid. The author argues, though, that for a combined matrix definition the individual and specific risks are of less importance to the initial set-up of a combined Risk Assessment Matrix. The combined Risk Assessment Matrix has to be set up to produce an overall risk assessment constructed from the risks essential to the overall business success. Although specific risks certainly do exist and are either correlated or unrelated to others, they might not be essential to the overall business success of the Organisation. They should therefore not be weighted equally with business-critical risks when proposing the optimum Online Security Solution.

    As an example, take the usage of the most advanced technology for online identification by an international Organisation. The technical risk assessment will produce the best result over all other technical solutions (which would therefore overrule all other potential solutions). The usability and applicability might be questionable, though, as it requires a specific level of technically advanced equipment. If the clients are unable to use such an advanced system because they do not have the preliminary system set-up, the technically best solution will fail – and with it the overall solution. It is therefore vital to define first which are the top risks to mitigate, which are the top criteria to be fulfilled, and which risk groups have a higher priority than others. The Risk Policy defines the importance level of the actual risks and their priority.

    When talking about Risk Policies and Risk Assessments, it must be clear that the specific policy will probably look different for each Organisation. Specific factors in the assessment process, however, will be the same for a given topic across different industries. E.g. the criteria for buying a car include many generic factors (seats, speed, colour) but also some specifics which apply only to a particular group of cars (e.g. the number of valves for a specific 3.5 litre engine). These generic factors are discussed and approached in the set-up of a commercial Risk Assessment Matrix.

     

    Overview of Risk Factors for Online Services

     Definition of Risk Influence

    Online Services via open electronic networks are threatened by a number of different risks. As a starting point for the ongoing research, the author first differentiates between

    1. Service Internal risk groups
    Risks which are managed internally and affect the internal Organisation only.

    • Identification of User
    • Entitlement determination
    • Technical Environment of Server

    2. Service Inbound risk groups and
    External risks which originate outside the Organisation and which the Organisation cannot influence.

    • Legal specifications which influence the system availability and set-up
    • Usability requests for services from clients
    • Outside attacks to internal environment

    3. Service Outbound risk groups.
    Risks which propagate from the system or service to the outside world.

    • Reputational risks due to system failure
    • Financial risk of Service Unavailability for clients.
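    The three risk-influence groups above can be sketched as a small taxonomy; the class layout and the example risks are illustrative only:

```python
# Sketch of the three risk-influence groups as a simple taxonomy.
# Group names follow the paper; the class layout is illustrative only.
from dataclasses import dataclass
from enum import Enum

class RiskInfluence(Enum):
    INTERNAL = "service internal"   # managed inside the Organisation
    INBOUND = "service inbound"     # originates outside, cannot be influenced
    OUTBOUND = "service outbound"   # propagates from the service to the outside

@dataclass
class Risk:
    name: str
    influence: RiskInfluence

risks = [
    Risk("user identification", RiskInfluence.INTERNAL),
    Risk("legal specifications", RiskInfluence.INBOUND),
    Risk("reputational damage after system failure", RiskInfluence.OUTBOUND),
]

# An internal-only Online Service need not assess outbound risks:
internal_scope = [r for r in risks if r.influence is not RiskInfluence.OUTBOUND]
print([r.name for r in internal_scope])
```

    Filtering by influence group, as in the last line, is exactly the "different assessment" for internal services described in the following paragraph.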

    The reason for this first differentiation is the risk assessment necessity. Internal Online Services do not need to take Outbound Risks into account and can therefore be assessed in a different way. If a service is very much dependent on outside influences (e.g. legal forces), this risk group has to be managed with the highest priority, as it may cause the highest jeopardy.

    The less influence a risk group has on the others, the more independently it can be assessed. The more correlation exists, the more crucial a particular risk group might be and the more effort has to be spent to mitigate the specific risks of that group.
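    This correlation argument can be illustrated with a short sketch that flags strongly correlated risk criteria in historical observations; the data, threshold and criterion count below are invented for the example:

```python
# Sketch: flag risk criteria that are strongly correlated and therefore
# cannot be assessed independently. All data below is synthetic.
import numpy as np

# Rows = historical observations, columns = three hypothetical criteria.
observations = np.array([
    [0.1, 0.2, 0.9],
    [0.2, 0.3, 0.1],
    [0.4, 0.5, 0.8],
    [0.8, 0.9, 0.2],
])

corr = np.corrcoef(observations, rowvar=False)  # 3x3 correlation matrix

# Criterion pairs above this (arbitrary) threshold should be grouped
# and mitigated together rather than assessed independently.
THRESHOLD = 0.9
pairs = [(i, j) for i in range(corr.shape[0])
         for j in range(i + 1, corr.shape[1])
         if abs(corr[i, j]) > THRESHOLD]
print(pairs)  # here only criteria 0 and 1 move together
```

    In the synthetic data, criterion 1 is a shifted copy of criterion 0 (correlation 1.0), while criterion 2 is only weakly related, so only the pair (0, 1) is flagged.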

     Risk Appearance

    Each of the risks discussed above can be further detailed into major risk appearances, which might be correlated to each other.

    1. Operational
    Any organisational or administrative risk related to Online Services.

    2. Technical
    Technical failure or misbehaviour of internal or external components of a system or service.

    3. Legal
    Specific legal requirements for a certain type of service, or non-compliant behaviour.

    4. Financial
    Any situation which affects financial turnover or financial stability (in either direction), caused by internal, inbound or outbound situations.

    5. Reputational
    Risks related to the external market perception of, and attitude towards, a Service or the Organisation.

    6. Business
    Risks which result in different business performance or unexpected business development.

    Each of the Risk Appearances needs to be further detailed with specific risk criteria. These criteria have to be set up by the "functional experts" of the Organisation. Their structure needs to state the smallest possible risk factors, which cannot be divided into smaller risks. This is necessary to obtain a clear quantitative and qualitative assessment of each criterion. If this definition is unclear and not specific enough, the overall risk assessment will fail, as the interpretation will be too vague and misleading.

    As the research progresses, the Risk Appearances will be detailed further, and specific entities as discussed will be defined for Online Trust and Security solutions.

    Which risks cannot be assessed?

    Risk Management uses statistical measures which can be defined to a certain probability. However, everybody who has worked in a Corporation or Financial Institution will know that some risks cannot really be addressed. Although Chaos Theory has been proven applicable in some instances, human nature and behaviour are often not assessable. A personal preference of the Senior Executive is probably not known until the very end of the assessment process, when the final decision has to be taken. Nor might a political decision of the management be documented which bears no relation to the assessed business needs or technical requirements.

    However, it is fundamental – if no other scientific model can be applied in a commercially justifiable way – to understand the Risk Assessment Matrix set-up as a proposal of (more or less) objective expectations of risks, and the related outcome for Online Security and Trust Solutions, which can be quantified or qualified.

    Risk Assessment Matrix Approach

    The set-up of a combined static Risk Assessment Matrix for Online Security and Trust Services

    To construct a combined Risk Assessment Matrix, the following steps have to be performed:

    Step 1: Set-up and Definition of an Online Security and Trust Services Policy – As this is an internal process (probably organised by the Chief Risk Officer), this Policy will be generic in terms of the definition and priorities of Risk Groups and needs to be individualised for each Organisation. It is intended to provide a comprehensive and nearly complete framework for the further set-up.

    Step 2: Definition of Generic Risk Groups, which are (directly or  indirectly) defined in the Policy.

        - Based on Risk Influence and

        - Risk Appearance

    Step 3: Prioritisation of the defined Risk Groups against each other.

        - Business Essential (Kill Criteria)

        - Business Critical (Major disruption of business, with high effort to resolve issues)

        - Business neutral (Moderate expectations of business changes)

        - Business uncritical (No business influence if risk appears)

    Step 4: Individual Definition of Specific Risk Appearances and Assessment Criteria (Quantitative and Qualitative Measures, Comparable Measures need to be defined).

    Step 5: Prioritisation of Individual Risk Appearances and Assessment Criteria against each other.

    Step 6: Correlation definition of Risk Appearances.

    Interim Result: Individual Risk Assessment Matrix for each Risk Appearance.

    Step 7: Combination of the individual Risk Matrices into one Risk Assessment Matrix.

        - Prioritisation of the individual Matrices, based on the Online Security and Trust Services Policy.

        - Evaluation and set-up of correlations between Risk Appearances of different Risk Matrices (requires comparable measurements).

        - Final harmonisation of Assessment Criteria measures.

    Final Result: Combined Risk Assessment Matrix for Online Security and Trust Solutions.
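    The step sequence above can be sketched in miniature: priority classes from Step 3, a kill criterion for business-essential risks, and a weighted combination as in Step 7. Every risk group, priority class and score below is a hypothetical illustration, not the paper's actual matrix:

```python
# Miniature sketch of the priority classes, kill criteria, and the
# combination of individual risk matrices into one score per solution.
# Every name and number below is a hypothetical illustration.

PRIORITY_WEIGHT = {
    "business_essential": None,   # kill criterion: handled separately below
    "business_critical": 3.0,
    "business_neutral": 1.0,
    "business_uncritical": 0.25,
}

def combine(matrices):
    """matrices maps a risk group to (priority class, risk score in [0, 1]).
    Returns a combined weighted score, or None if a kill criterion fires."""
    total, weight_sum = 0.0, 0.0
    for group, (priority, score) in matrices.items():
        w = PRIORITY_WEIGHT[priority]
        if w is None:                # business-essential risk group
            if score > 0.8:          # risk too high: solution is excluded
                return None
            w = 4.0                  # otherwise weight it above critical risks
        total += w * score
        weight_sum += w
    return total / weight_sum        # one combined assessment

biometric = {
    "technical": ("business_critical", 0.1),
    "legal": ("business_essential", 0.9),   # e.g. not legally admissible
    "usability": ("business_neutral", 0.7),
}
print(combine(biometric))  # the legal kill criterion excludes this solution
```

    The veto behaviour mirrors the earlier identification example: a solution that wins on the technical matrix can still be excluded outright by a business-essential risk in another matrix.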

    Artificial Intelligence Methods for Risk Assessment

    As mentioned above, a dynamic approach to the development and applicability of a Risk Assessment Matrix would be better suited to a changing risk environment than a static matrix. Different AI methods can be used to obtain such dynamic functionality for the Risk Assessment Matrix. In this paper we propose the use of Neural Networks (NNs) to establish the assessment, and the probability of this assessment, from the risk criteria established by the user. A drawback of NNs in the financial industry is that the result of the net cannot be explained, because the net works as a "black box". A better approach may be Genetic Programming (GP), since its results can be explained from the GP equations. The use of GP will be future work in the establishment of the Risk Assessment Matrix process.

    Neural Networks method

    Neural networks (NNs) are computer-based learning systems that have demonstrated utility in prediction, classification, and decision-making [4], [5], [6], [7]. NNs identify patterns between input (predictor) and target (criterion) variables. These systems are typically described using the biological analogy of brain neurons. This analogy centres on the fact that neural networks do not operate on a set of programmed instructions, as statistical packages do. Rather, they pass data through multiple parallel processing entities (nodes or neurodes) that "learn" and adapt to the patterns presented to them. Data are not stored in these entities, nor are particular answers stored at particular addresses [8]. Processing functions assume a pattern throughout the system. This pattern, developed in the iterative learning process, comes to represent the relationships between the input variables and the target variable [6].
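    The iterative learning process described above can be made concrete with a minimal two-layer network trained by backpropagation on XOR, a classic pattern that no linear model can learn. This toy sketch is illustrative only and is not the assessment model proposed in this paper:

```python
# Minimal feedforward neural network trained by backpropagation.
# Toy task: XOR, a pattern no single-layer (linear) model can learn.
# This is an illustrative sketch, not the paper's assessment model.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> 4 hidden nodes
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> 1 output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):               # the iterative learning process
    h = sigmoid(X @ W1 + b1)        # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)      # forward pass: prediction
    # backward pass: gradients of the squared error w.r.t. each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())        # the learned input/target pattern
```

    Note that no explicit rule for XOR is ever programmed; the relationship emerges in the weights through the repeated forward/backward passes, which is the "pattern throughout the system" described above.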

    Potential Advantages of the NN Prediction Approach

    Neural networks offer some advantages over traditional statistical prediction methods. First, instead of assuming a particular form of relationship between independent and dependent variables and then using a fitting procedure to adjust the size of the parameters in the model, neural networks construct a unique mathematical relationship for a given data set based on observed patterns between explanatory variables and designated outcomes [9]. Second, the nonparametric nature of neural networks may make them particularly suited to social science data, where normality and linearity cannot be assumed. NNs have demonstrated capacity in handling interactions and nonlinearities [8]. Third, neural networks have been demonstrated to be fairly robust in handling input corrupted by random error [10], [11], [12]. Fourth, there is some evidence that NNs may excel over linear discriminant models under increasingly stringent thresholds for class membership. A "threshold" refers to a minimum score that must be reached for an example to be classified into one of two classes [14]. Fifth, NNs may perform well on problems with low base rates [13]. Finally, it appears that there is no significant disadvantage, other than length of training time, in including a large number of predictor variables in neural network analysis [10]. The network will disregard variables that are not associated with the output by not assigning weights to those variables, leaving them at their near-zero initial values. Also, it appears that intercorrelation among predictor variables does not detract from goodness of fit [14]. This suggests that neural networks are suitable for classification problems in which the number of predictor variables is large and the intercorrelation among those variables is high.

    Neural networks are not panaceas; they do not substitute for wise variable selection and accurate measurement of data. However, it can be argued persuasively that neural network models are often superior to alternatives in terms of predictive power [8]. Neural networks appear to have potential for making outcome predictions in areas where independent variables are likely to be intercorrelated and where data are affected by moderate levels of random error and missing data. This is likely to be the case for risk assessment of online trust and security solutions in the Financial Industry.

    Applying NNs to Risk Assessment for Online Trust and Security Solutions

    For training the NN we have information about risks and their specific risk criteria: the assessment of the risks and risk groups in relation to the risk criteria, and the global risk obtained when all individual risks are joined.

    The input to the NN is a vector that contains information about all risk criteria, and the output of the NN could be a vector containing the assessments of the Risk Assessment Matrix and the probability of each assessment.

    Another possible solution for applying NNs to the Risk Assessment is that the output of the net is the assessment of the global risk with an associated probability.

    Next Activities

    The authors have explained the reasons for a Neural Network based Risk Assessment Matrix for the selection process of Online Trust and Security Solutions. With the combination of multiple Risk Areas and intelligent assessment methods, Organisations will be able to speed up the selection process and to reduce assessment mistakes across a large risk criteria base collected over time. It is now necessary to define in detail the specific Risk Appearances, Risk Criteria and Risk measures for Online Trust and Security Solutions, followed by the definition of input variables for the NNs, in order to finally proceed with the combined Matrix set-up.

    Risk Criteria and measures have to be specified with regard to their correlation and applicability – how to measure specific risks and how to compare them with others. Although a scientific approach and methodology will be vital for the basic structure of such a Matrix, it is indispensable to apply the Matrix to practical situations.

    References

    [1] Glasserman, Paul: The quest for precision through Value at Risk. In: Pickford, James (Executive Editor): Mastering Risk, Volume 1: Concepts. Pearson Education Limited (2001) 109-114.

    [2] KonTraG, Bundesgesetzblatt I 1998, 30.04.1998, 786-794.

    [3] Götze, U., Henselmann, K., Mikus, B. (Editors): Risikomanagement. Physica-Verlag (2001).

    [4] Cross, S.S., Harrison, R.F., & Kennedy, R.L. (1995). Introduction to neural networks. The Lancet, 346, October 21, 1075-1079.

    [5] Galletly, C.A., Clark, C.R., & McFarlane, A.C. (1996). Artificial neural networks: A prospective tool for the analysis of psychiatric disorders. Journal of Psychiatry and Neuroscience, 21 (4), 239-247.

    [6] Patterson, D.A. & Cloud, R.N. (2000). The application of artificial neural networks for outcome prediction in a cohort of severely mentally ill outpatients. Forthcoming in The Journal of Technology for Human Services.

    [7] Y., Shen, Y., Shu, L., Wang, Y., Feng, F., Xu, K., Qu, Y., Song, Y., Zhong, Y., Wang, M., & Liu, W. (1996). Artificial neural network to assist psychiatric diagnosis. British Journal of Psychiatry, 169 (1), 64-67.

    [8] Garson, G.D. (1998). Neural Networks: An Introductory Guide for Social Scientists. Thousand Oaks, California: Sage.

    [9] Marshall, D.B., & English, D.J. (2000). Neural network modeling of risk assessment in child protective services. Psychological Methods, 5 (1), in press.

    [10] Hartzberg, J., Stanley, J., & Lawrence, M. (1990). Brainmaker user's guide and reference manual (Computer program manual). Sierra Madre, CA: California Scientific Software.

    [11] Lippmann, R.P. (1987). An introduction to computing with neural nets. IEEE ASSP Magazine, April, pp. 4-22.

    [12] Weiss, S.M., & Kurlikowski, C.A. (1991). Computer systems that learn: Classification and prediction methods from statistics, neural nets, machine learning, and expert systems. San Mateo, CA: Morgan Kaufmann.

    [13] Gordon, J.S. (1991). (Probability correct classification as a function of increasing decision thresholds). Unpublished raw data.

    [14] Gordon, J.S. (1992). A neural network approach to the prediction of violence. Unpublished doctoral dissertation, Oklahoma State University.

     
