ABSTRACT

Title of Thesis: A SYSTEMS MODELING DESIGN UTILIZING AN OBJECT-ORIENTED APPROACH CONCERNING INFORMATION RISK MANAGEMENT
Degree candidate: Noriaki Suzuki
Degree and year: Master of Science in Systems Engineering, Fall 2005
Thesis directed by: Nelson X. Liu, Assistant Research Scientist, Institute for Systems Research

Adopting advanced information technologies across today's broad range of application fields requires precise security. However, security problems concerning information privacy have occurred frequently over the last five years despite the contributions of these technologies.

To respond to the need to secure information privacy, the Information Privacy Law was enacted in Japan on April 1, 2005. One response to this law is a demand for a higher level of information risk management and a search for more effective tools for identity protection and problem solving. Two examples of such tools are RAPID and IRMP. However, there is no established system-development model for either of these tools, and further development to improve RAPID and IRMP remains an open challenge.

In this thesis, a new approach to developing a system security model for information risk management is proposed. To demonstrate this approach, an object-oriented language is used.

A SYSTEMS MODELING DESIGN UTILIZING AN OBJECT-ORIENTED APPROACH CONCERNING INFORMATION RISK MANAGEMENT

By Noriaki Suzuki

Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Science, Fall 2005

Advisory Committee:
Dr. Nelson X. Liu, Assistant Research Scientist, Institute for Systems Research
Professor Eyad Abed, Director of the Institute for Systems Research
Professor Michel Cukier, Assistant Professor, Reliability Engineering

© Copyright by Noriaki Suzuki 2005

ACKNOWLEDGEMENTS

I would like to express my sincere thanks to my advisor, Dr. Nelson X. Liu, for his strong direction and guidance throughout this work. I could not have completed the thesis without his constant advice and help. I would also like to thank my secondary advisor, Lee Strickland, who helped me analyze current security breaches and investigate the information risk assessment in Chapter 3.

I would also like to thank Prof. Eyad Abed and Prof. Michel Cukier for agreeing to serve on my committee. Furthermore, I am grateful to my friends Sivakumar Chellathurai, Suchee Nathan, and Brent Anderson for their help and valuable comments on the written form of this thesis. Many thanks to Jonathan Eser, Daniela Villar del Saz, Gokul Samy, and Pampa Mondal for their friendship and support throughout my studies in Maryland; without their support I probably would not be where I am now. Finally, I am grateful to the systems engineering program at the University of Maryland, College Park.

This study has equipped me with the skills I need in order to make a more significant contribution to the world and has offered me the tools I will need to overcome obstacles I may face in the future.

TABLE OF CONTENTS

1. INTRODUCTION
2. BACKGROUND
2-1. Review of Existing Studies
2-2. Information Risk Management
2-3. Systems Modeling
2-3-1. Meta Model
2-3-2. UML
3. RISK ASSESSMENT USING THE CURRENT SECURITY ISSUES

3-1. Risk Assessment Methodology
3-2. Security Risk Assessment
3-3. Suggestions for Preventing Security Breach
4. STATIC AND DYNAMIC INFORMATION POLICIES
4-1. Definition

4-2. Static Information Policy
4-3. Dynamic Information Policy
4-3-1. Sample Dynamic Policy 1 (Dynamic/ Confidentiality, Availability/ Access Control, Intrusion Detection/ [SB-7], [SB-12])
4-3-2. Sample Dynamic Policy 2 (Dynamic/ Availability, Accountability/ Intermediate Control, Intrusion Detection/ [SB-12], [SB-13])
5. DEVELOPING THE SECURITY MODEL WITH UML
5-1. Sample System Overview
5-1-1. System Boundary
5-1-2. Use Case
5-1-3. Scenarios
5-2. Structural System Model corresponding to the information policies
5-2-1. Class Description
5-2-2. Class Diagram
5-2-3. Object Description
5-2-4. Object Diagram

5-3. Behavioral System Model corresponding to the information policies
5-3-1. Activity Diagram
5-4. Systems Verification
5-4-1. Test Data
5-4-2. Data Analysis

5-4-3. Improvements
5-4-4. Results
6. CONCLUSIONS
7. FUTURE EFFORT
APPENDIX A: SUMMARY OF SECURITY BREACH
REFERENCES

LIST OF TABLES

Table 1: Relative Cost to Correct Security Defects by Stage
Table 2: Security Defects by Category
Table 3: Comparison of the Approach of This Thesis to Other Approaches
Table 4: Terms for Risk Measurement
Table 5: Probability Levels of an Undesired Event
Table 6: Severity Levels of Undesired Event Consequences
Table 7: Risk Assessment Matrix
Table 8: Security Levels of Undesired Event for an Asset in Information Risk Assessment
Table 9: Rating for the Probability of Occurrence
Table 10: Rating for the Security Level
Table 11: Category Table for Security Breaches
Table 12: Each Assessment Rating
Table 13: Asset Assessment Worksheet
Table 14: Rating Table for Threat Assessment
Table 15: Threat Assessment Worksheet
Table 16: Rating Table for Vulnerability Assessment
Table 17: Vulnerability Assessment Worksheet
Table 18: Risk Assessment and Countermeasure Options Worksheet
Table 19: Security Properties
Table 20: Access Control Matrix
Table 21: Classification for A Sample Information Policy
Table 22: Classification for NTFS
Table 23: Classification for Information Policy 1 (Static)
Table 24: Classification for Information Policy 2 (Static)
Table 25: Rule for each Warning Level for Sample Dynamic Information Policy 1
Table 26: Statistical Analyzed Max Number of Accesses I
Table 27: Statistical Analyzed Max Number of Accesses II
Table 28: The Final Estimated Statistical Analyzed Max Number of Accesses
Table 29: Rule for each Audit Level for Sample Dynamic Information Policy 2
Table 30: Audit Data 1 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 31: Statistical Data 1 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 32: Audit Data 1 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 33: Statistical Data 1 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 34: Object Confidential Level
Table 35: Audit Data 2 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 36: Statistical Data 2 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 37: Audit Data 2 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 38: Statistical Data 2 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 39: Summary of the Statistical Variables in the Period
Table 40: Procedure Output Statistical Variable in the Period
Table 41: The Final Estimated Statistical Analyzed Audit Level
Table 42: Identified Objects and Class Table
Table 43: Class Description for the Security System Model
Table 44: Object Description for the Security System Model
Table 45: Load Access Control Matrix for the Security System Model
Table 46: Objects and Data for Security Mechanism without Policy for Countermeasure 1
Table 47: Objects and Data for Security Mechanism Static Policy for Countermeasure 1
Table 48: Objects and Data for Security Mechanism Dynamic Policy 1
Table 49: Objects and Data for Security Mechanism without Policy for Countermeasure 2
Table 50: Objects and Data for Security Mechanism Static Policy for Countermeasure 2
Table 51: Objects and Data for Security Mechanism Dynamic Policy 2
Table 52: Validation of Information Policies

LIST OF FIGURES

Figure 1: Depiction of Information Risk
Figure 2: Overview of Information Risk Management Model
Figure 3: Meta Model Architecture
Figure 4: UML 2.0 Architecture
Figure 5: Process Flow to Develop the System Model
Figure 6: Information Risk Profile
Figure 7: Structure of Workflow for Information Policy Setting
Figure 8: Graphical Representative of Probability of Risk Occurrence
Figure 9: Graphical Representative of Information Asset
Figure 10: Graphical Representative for Risk Level
Figure 11: Methodical Categorization of Information Security Policy
Figure 12: Access Control Model
Figure 13: Protection Rings
Figure 14: Audit Data for the Sample Dynamic Information 1, Day 1 in the (dy) Period
Figure 15: Audit Data for the Sample Dynamic Information 1, Day 2 in the (dy) Period
Figure 16: Audit Data for the Sample Dynamic Information 1, Day 3 in the (dy) Period
Figure 17: Cross Table of The Number of Accesses in the (dy) period
Figure 18: Sample Event Log from Apr 18 to Apr 19 in 2005
Figure 19: The Crucial Event Log on Apr 18, 2005
Figure 20: The Crucial Event Log on Apr 19, 2005
Figure 21: Sample Credit Card Online System
Figure 22: Use Case for the Secured System Model
Figure 23: Class Diagram for the Security System Model (Part I)
Figure 24: Class Diagram for the Security System Model (Part II)
Figure 25: Object Diagram :: UserPC Class
Figure 26: Object Diagram :: UserInf Class
Figure 27: Object Diagram :: ACM Class
Figure 28: Object Diagram :: Dynamic Policy Class
Figure 29: Object Diagram for the Security System Model
Figure 30: Activity Diagram for User Login (Part I)
Figure 31: Activity Diagram for User Login (Part II)
Figure 32: Activity Diagram for User Access (Part I)
Figure 33: Activity Diagram for User Access (Part II)
Figure 34: Activity Diagram for Dynamic Policy 1 (Part I)
Figure 35: Activity Diagram for Dynamic Policy 1 (Part II)
Figure 36: Activity Diagram for Dynamic Policy 2 (Part I)
Figure 37: Activity Diagram for Dynamic Policy 2 (Part II)
Figure 38: Required Material to Generate Test Data
Figure 39: Datasheet of Number of Accesses by User 1, 2, and 3
Figure 40: Datasheet of Totaled Number of Accesses each Day
Figure 41: Datasheet of Procedure Process for Dynamic Information Policy 1
Figure 42: Datasheet of Frequency of Access of User
Figure 43: Datasheet of Occupancy Rate of User
Figure 44: Datasheet of Frequency of Access to an Object
Figure 45: Datasheet of Totaled Number of Accesses and Maximum Occupancy each Day
Figure 46: Datasheet of Totaled Frequency of Access to an Object
Figure 47: Datasheet of Procedure Process for Dynamic Information Policy 2
Figure 48: Audit Level Matrix in Test Data
Figure 49: Datasheet of Number of Accesses on Feb. 12, 2003
Figure 50: Datasheet of Procedure Process for User 1
Figure 51: Datasheet of Procedure Process after Improvement
Figure 52: Example of Revision for Maximum Number of Accesses for each Warning Level
Figure 53: Audit Level Matrix in Test Data after Revision
Figure 54: SCD of Security Mechanism without Policy for Countermeasure 1
Figure 55: SCD of Result without Policy for Countermeasure 1
Figure 56: SCD of Security Mechanism with Static Policy 1
Figure 57: SCD of Result with Static Policy 1
Figure 58: SCD of Security Mechanism with Dynamic Policy 1
Figure 59: SCD of Result with Dynamic Policy 1
Figure 60: Datasheet of Objects and Data from Test Data
Figure 61: SCD of Security Mechanism without Policy for Countermeasure 2
Figure 62: SCD of Result without Policy for Countermeasure 2
Figure 63: SCD of Security Mechanism with Static Policy 2
Figure 64: SCD of Result with Static Policy 2
Figure 65: SCD of Security Mechanism with Dynamic Policy 2
Figure 66: SCD of Result with Dynamic Policy for Countermeasure 2

1. Introduction

With the advent of the digital age, a coherent information policy is required to support the rapid flow of information. Internet technologies have changed the face of both business and personal interaction. Introducing a system to the Internet results in immediate, world-scale network exposure. This exposure makes information exchange easy and fast; however, it also leads to security breaches involving information leaks, which occur daily throughout the world. In Japan, the total number of such problems last year amounted to 2,297 instances.

From January to July of 2005 alone, the number of Internet privacy losses already reached 7,009 cases1. To address this vulnerability, the Information Privacy Law was enacted in Japan on April 1, 20052. After the law was enacted, an onslaught of system security products was released, with many more on the way3; however, these products will not be effective unless both system developers and users understand what kind of information policy a system requires and how to address that policy when developing security systems4.

According to the article "The Way to Develop the Secure Information Risk Management System"5, based on an analysis of 61 cases of privacy information security breach problems published on the NetSecurity site6, less than 10% of the problems were actually caused by hacker attacks. Nearly all of the remaining problems (about 90%) originated at the design stage (41%) or the maintenance stage (52.5%). This demonstrates the lack of regard for information risk management at the design and maintenance stages.

1 The present state of the privacy information breach problem, http://www.ahnlab.co.jp/virusinfo/security_view.asp?news_gu=03&seq=86&pageNo=4
2 Ministry of Internal Affairs and Communications information privacy law site, http://www.soumu.go.jp/gyoukan/kanri/kenkyu.htm
3 Systemwalker Desktop Keeper, Fujitsu, http://systemwalker.fujitsu.com/jp/desktop_keeper/; IPLOCKS Information Risk Management Platform, IPLOCKS, 2 pages, http://www.iplocks.com/images/newsarticles/3customers_final_052705.pdf
4 Overreacted to the information privacy law, Yoshiro Tabuchi, NIKKEI BP in Japan, July 5, 2005, http://nikkeibp.jp/sj2005/column/c/01/index.html?cd=column_adw

The same article concludes that the problems caused at the design stage are due to a lack of understanding on the part of the systems engineer, and that security risks compound as systems near completion. The same poor understanding of security and information risk management on the part of operators is responsible for delayed responses during maintenance. In addition, the report "Information security: why the future belongs to the quants"7 indicates that developers should work on quality early in the process, where the cost is lower, as Table 1 shows. The cost to correct a defect at the design stage is the lowest among all stages.

On the other hand, the cost at the maintenance stage is the highest; it is roughly 100 times the cost at the design stage. In addition, security defects also tend to occur in some categories more than in others, as depicted in Table 2.

5 The way to develop the secure information risk management system, D add ninth Co., Ltd., Oct 1, 2002, http://www.dadd9.com/tech/sec4manager.html
6 NetSecurity, Livin' on the EDGE Co., Ltd. & Vagabond Co., Ltd., https://www.netsecurity.ne.jp/
7 Information security: why the future belongs to the quants, Security & Privacy Magazine, IEEE, July-Aug. 2003, Volume 1, Issue 4, Pages 24-32

STAGE            RELATIVE COST
Design           1.0
Implementation   6.5
Testing          15.0
Maintenance      100.0

Table 1: Relative Cost to Correct Security Defects by Stage8

CATEGORY                        ENGAGEMENTS WHERE OBSERVED   DESIGN RELATED   SERIOUS DESIGN FLAWS*
Administrative interfaces       31%                          57%              36%
Authentication/access control   62%                          89%              64%
Configuration management        42%                          41%              16%
Cryptographic algorithms        33%                          93%              61%
Information gathering           47%                          51%              20%
Input validation                71%                          50%              32%
Parameter manipulation          33%                          81%              73%
Sensitive data handling         33%                          70%              41%
Session management              40%                          94%              79%
Total                           45%                          70%              47%

Table 2: Security Defects by Category9

The categories printed in bold in the above table relate to information leak problems. As the above evidence shows, it is crucial to address security issues at the design stage when developing systems. To overcome the abovementioned problems, several information risk management tools such as RAPID10 and IRSP11 have been developed.

8 Information security: why the future belongs to the quants, Security & Privacy Magazine, IEEE, July-Aug. 2003, Volume 1, Issue 4, Page 26
9 Information security: why the future belongs to the quants, Security & Privacy Magazine, IEEE, July-Aug. 2003, Volume 1, Issue 4, Page 29
10 Information Security Program Development Using RAPID, http://www.nmi.net/rapid.html
11 Information Risk Management Program, CSC CyberCare, 5 pages, http://www.csc.com/industries/government/knowledgelibrary/uploads/807_1.pdf

RAPID is used for defining the necessary business processes and developing guidelines to build a security system. IRSP provides information risk management programs to support and improve existing client security systems. However, these tools identify only the management steps needed to enforce an existing security system and do not expand on the methodology and models necessary for developing new systems. In developing new systems that require a significant information policy, the methodology, approach, system modeling, and system model verification must be clearly outlined. Such an outline significantly helps a systems engineer work on quality at an early stage of system development. This thesis focuses on information risk assessment, a method of developing information policy, system modeling, and system model verification, all of which must be considered at the design stage.

Newly developed systems must be forward compatible with new technologies to address threats that may arise during the maintenance stage. The design is therefore required to be:
- suitable for the addition of various security components;
- built from re-usable security components applicable to various systems;
- able to provide a common understanding among system developers.

Object-oriented design is the design methodology best suited to meeting the aforementioned requirements, and the Unified Modeling Language (UML)12, as the premier meta-modeling language for analysis and design in the software engineering community, will be used.

For example, modeling the systems with UML creates a common understanding for both developers and user domain experts. A better understanding of how the user systems function facilitates the detection of security problems early in system development.

12 OMG (Object Management Group) official site, Unified Modeling Language, http://www.uml.org/

The following are the benefits of using the system development approach and methodology involving the information risk management model discussed in this thesis:
- Prioritizes security risk solutions: Information risk assessment helps developers identify the most urgent security issues and provide risk solutions.
- Secures private information: A well-grounded countermeasure and a security solution based on a risk assessment make systems secure.
- Reduces errors: Information risk management, system modeling, and system model verification at the design stage reduce security defects efficiently and effectively.
- Facilitates updates for new security threats: System modeling with UML helps developers update and reuse components in systems, and provides high compatibility with other systems.

- Provides understandable system design documents: Information risk management based on a simple concept, together with design documents written in UML, is readily understandable to developers and stakeholders.

In this thesis, information risk management is introduced along with current security studies and a new model, which is developed herein. Next, following the presentation of this new model for information risk management, 20 current security breach issues will be analyzed and assessed using the risk assessment method. Then, the three worst-case security breach issues will be addressed.

Third, information policies will be developed using countermeasures in response to the worst-case security breaches, and then security mechanisms based on these information policies will be shown. Fourth, system modeling will be performed with UML. Fifth, security mechanisms based on the dynamic information policies developed in this thesis will be verified. In addition, validation of the system model is shown: a state chart diagram for a case without policy, with a static policy, and with a dynamic policy is developed, and the cases are then validated with UPPAAL13.

UPPAAL is an integrated tool environment for modeling, validating, and verifying real-time systems modeled as networks of timed automata. Finally, this thesis will conclude by demonstrating how to develop the secured system model discussed herein. This thesis contributes to systems engineering by providing a framework for handling security issues faced by enterprises managing secure information.

13 UPPAAL homepage, http://www.uppaal.com/

2. Background

2-1. Review of Existing Studies

In this report, the main themes are outlining a methodology and a modeling approach for developing systems that prevent information leaks.

Such an effort is significant for systems engineers developing systems, and similar studies are available. In this section, three similar existing studies are reviewed, their approaches are analyzed, and an outstanding methodology with model-based risk assessment is applied to developing systems.

1. Information Flow Analysis of Component-Structured Applications, Peter Herrmann, 200114

The diversity and complexity of information flow between components pose the threat of leaking information. Security analysis must be performed in order to provide suitable security solutions.

Systems are audited for vulnerabilities, threats, and risks. Based on the audit, effective safeguards are selected, designed, and configured. However, since information flow analysis tends to be expensive and error-prone, object-oriented security analysis and modeling is utilized. It employs UML-based object-oriented modeling techniques and graph rewriting in order to make the analysis understandable and to assure its accuracy even for large systems. Information flow is modeled based on Myers' and Liskov's15 decentralized label model, combining label-based read access policy models and declassification of information with static analysis.

14 Computer Security Applications Conference, ACSAC 2001, Proceedings 17th Annual, 10-14 Dec. 2001, Page 45-54

2. Developing Secure Networked Web-Based Systems Using Model-based Risk Assessment and UMLsec, Siv Hilde Houmb / Jan Jürjens, 200316

Despite a growing awareness of security issues in networked computing systems, most development processes used today still do not take security aspects into account. This paper shows a process for developing secure networked systems based on the CORAS framework17,18, whose concept is model-based risk assessment using UMLsec.

UMLsec is an extension of the Unified Modeling Language (UML) for secure systems development. Enterprise information such as security policies, business goals, and processes is supported through activities in a model-based integrated development process. Security requirements at a more technical level can be expressed using UMLsec. Additionally, a support tool for a mechanical analysis of such requirements is provided.

15 A Decentralized Model for Information Flow Control, A. C. Myers and B. Liskov, in Proc. 16th ACM Symposium on Operating Systems Principles, Saint-Malo, France, 1997

16 Software Engineering Conference, Tenth Asia-Pacific, 2003, Page 488-497
17 Towards a UML profile for model-based risk assessment, S.-H. Houmb, F. den Braber, M. S. Lund, and K. Stolen, in Jürjens et al.
18 Business Component-Based Software Engineering, chapter Model-based Risk Assessment in a Component-Based Software Engineering Process, K. Stolen, F. den Braber, T. Dimitrakos, R. Fredriksen, B. Gran, S. Houmb, Y. Stamatiou, and J. Aagedal, The CORAS Approach to Identify Security Risks, Kluwer, 2002, Pages 189-207

3. Model-based Risk Assessment to Improve Enterprise Security, Jan Oyvind Aagedal / Folker den Braber / Theo Dimitrakos / Bjorn Axel Gran / Dimitris Raptis / Ketil Stolen, 200219

This paper attempts to define the required models for a model-based approach to risk assessment. CORAS is applied to provide methods and tools for precise, unambiguous, and efficient risk assessment of security-critical systems, since traditional risk assessment is performed without any formal description of the target of evaluation or of the results of the risk assessment.

CORAS provides a set of models to describe the target of assessment at the right level of abstraction, and a medium for communication between the different groups of stakeholders involved in a risk assessment. In one step of the risk treatment, a strengthening of the security requirements is suggested to handle the identified security problems. In addition, several risk assessment methodologies are presented, such as HazOp20 and FMEA21. HazOp is applied to address security threats involved in a system, and FMEA is applied to identify potential failures in the system's structure.

All components in the system's structure are expressed in UML. Many approaches are taken into account in developing systems. The following table is a brief comparison of the approach in this thesis to the three aforementioned approaches.

19 Model-based Risk Assessment to Improve Enterprise Security, Enterprise Distributed Object Computing Conference, 2002, EDOC '02, Proceedings, Sixth International, Sept. 2002, Page 51-62

20 Security Assessments of Safety Critical Systems Using HAZOPs, R. Winther, O. A. Johnsen, and B. A. Gran, 20th International Conference on Computer Safety, Reliability and Security, SAFECOMP 2001, Hungary, 2001
21 FMEA Risk Assessment, http://www.tangram.co.uk/TI-HSE-FMEA-Risk_Assessment.html

Approach 1: Survey: No; Risk Analysis: Common Criteria22; Security Solution: Information Flow; Design: UML; Tool: Java Beans-based components, MDR23
Approach 2: Survey: No; Risk Analysis: CORAS framework; Security Solution: UMLsec; Design: UML; Tool: UML CASE tool Poseidon24
Approach 3: Survey: No; Risk Analysis: CORAS, HazOp, FMEA; Security Solution: (Does not apply); Design: UML; Tool: N/A
This Thesis: Survey: Yes; Risk Analysis: Risk Assessment (new); Security Solution: Information policy extending with DOD25 and its procedure standard (new); Design: UML; Tool: UPPAAL26

Table 3: Comparison of the Approach of This Thesis to Other Approaches

All of these approaches are useful and powerful for developing secure systems; however, the approach in this thesis may be well suited to recent systems and widely and easily usable, since its concept is very simple and its security mechanisms are based on countermeasures derived from current information leak problems.

2-2. Information Risk Management

What is information risk management?

According to an article27 regarding information risk management provided by the Nomura Research Institute28, information risk management consists of those policies that reduce the risk inherent in information processing. As enterprises invest in information-oriented systems, databases such as customer information databases have resulted in enterprises maintaining and using greater quantities of information. Poor information management may result in information leaks and hacker attacks, resulting in considerable loss and damage to the enterprises. For example, the information of 900,000 clients of Yahoo! BB in Softbank, one of the largest high-speed Internet connection services, was leaked through an ex-employee's misuse of its database system29. The leaked information nearly spread across the Internet. The company paid $10 to each client as a self-imposed penalty for this lapse in security, and total financial losses reached $9 million. While insider culprits must pay for their crimes, the responsible company must create an environment where such events are defended against. The following figure represents information risk. The upper left graph in the figure shows the probability of risk occurrence in each given process; the bottom left graph shows the magnitude of assets involved in the information for each given process.

22 Common Criteria for Information Technology Security Evaluation, International Standard ISO/IEC, 1998
23 Meta-Data Repository, MDR homepage, http://mdr.netbeans.org
24 UML CASE tool Poseidon, Gentleware homepage, http://www.gentleware.com
25 Department of Defense home page, http://www.defenselink.mil/
26 UPPAAL homepage, http://www.uppaal.com/
27 Understanding Information Risk Management, Nomura Research Institute, 2002, 95 pages, http://www.nri.co.jp/opinion/chitekishisan/2002/pdf/cs20020910.pdf
28 Nomura Research Institute home page, http://www.nri.co.jp/english/index.html

The information risk level is determined by the combination of the probability and assets. For instance, a combination of high probability of risk exposure and high asset value will be the highest risk level; on the other hand, a combination of low probability of risk exposure and low value of assets will be the lowest risk level. In the case of the aforementioned security breach, the probability of risk exposure is once every 3 years; this probability is low. However, the value of information assets is extremely high. Thus, the risk level will be middle or high.

The risk level may be between unacceptable risk and undesirable risk.

29 The Japan Times; Softbank leak extortionist won't serve time; July 10, 2004; http://search.japantimes.co.jp/print/news/nn07-2004/nn20040710a4.htm

(Figure: probability of risk occurrence and treated information asset for each business process, combined against a risk level standard to give the risk level; unacceptable risk is reduced through countermeasures)
Figure 1: Depiction of Information Risk30

In response to emergent security problems, many government agencies in Japan and the United States now require information security management certification.

In Japan, the ISMS (Information Security Management System) certification was issued last year31 and has become a standard certification for information risk management. In the United States, Congress passed the Federal Information Security Management Act of 2002 (FISMA)32, which provides the overall information risk management framework for ensuring the effectiveness of information security controls that support federal operations and assets33.

30 Understanding Information Risk Management, Nomura Research Institute, 2002, Page 94, http://www.nri.co.jp/opinion/chitekishisan/2002/pdf/cs20020910.pdf
31 Information Security Management System (ISMS) home page, http://www.isms.jipdec.jp/
32 Federal Information Security Management Act of 2002 (Title III of E-Gov), Computer Security Resource Center, http://csrc.nist.gov/policies/
33 Improving Oversight of Access to Federal Systems and Data by Contractors can Reduce Risk, Wanja Eric Naef, GAO, April 2005, 28 pages

Information risk management is required in developing systems. It has become the center of public attention in the last few years. As a result of this demand, many information risk management tools have been introduced.

Among these tools, RAPID and IRSP are notable. RAPID is useful for defining the necessary business processes and developing guidelines to improve existing security systems; it fits security functions into existing systems. Its processes are 1) risk assessment, 2) security problem identification and awareness review, and 3) security program creation and support. IRSP provides an information risk management program to support and improve client security systems. The program defines certain client security policies and provides static and dynamic protection. The protection schemes are as follows:

- Static Protection: A rule and definition of security standards, security architecture, security services, recovery interfaces, and other additional security needs, based on countermeasures from the personal, physical, administrative, communications, and technology domains.
- Dynamic Protection: A security protection program involving vulnerability alert processes, vulnerability assessment processes, monitoring services, and an anti-virus program plan based on information security best practices.

After analyzing both types of protection, the program shows certain security compliance standards and specifications based on the static protection.

In addition, the program assembles best protection practices and creates the proper program based on the dynamic protection. However, these tools do not provide any strict methodology or approach for developing new systems with information risk management policies. Certain system models or system development methodologies and approaches must be formally provided to remedy this deficiency. This deficiency in current information risk management tools is the motivation for this thesis. In conjunction with the concepts and ideas from the abovementioned tools and the three existing methodologies, a new system development approach and methodology for information risk management will be introduced in this thesis. An overview of the system development model is as follows.

(Figure: risk assessment (Chapter 3) classifies risks as unacceptable, undesirable, acceptable with review by management, or acceptable without review; the target risk to solve is determined; the information policy is determined (Chapter 4); the information policy procedure yields information policies 1..n and security mechanisms 1..m)

Figure 2: Overview of Information Risk Management Model

Risk management is used to identify the risks involved in security breaches and to prioritize security solutions for those risks in any field. This will be described in detail in Chapter 3. An information policy consists of the rules developed for the target system in order to reduce or prevent the risk. A security mechanism is the set of procedures used to accomplish the information policy. Security mechanisms will be invoked in the subject system. This process will be shown in Chapter 4. In this thesis, some examples following this cycle will be shown in detail.

2-3. Systems Modeling

Modeling is a powerful technique for developing a system effectively and efficiently, and it offers many benefits to every participant in system development, such as stakeholders, system developers, and users. According to Mark Austin's lecture notes for the University of Maryland systems engineering program34, the benefits of using modeling are as follows:
- Assistance in Communication
- Assistance in Coordination of Activities
- Ease of Manipulation
- Efficient Trial-and-Error Experiments
- Reduction of Development Time
- Reduced Cost
- Risk Management

34 ENSE 622 Lecture notes, Mark Austin, University of Maryland, 2004, Page 78-79

The system model provides experiments, rules, and useful information for designing, developing, and implementing the system. By applying the model, cost, time, and risk will be dramatically reduced. For this reason, the modeling process will be the main concern regarding information risk management in systems engineering.

2-3-1. Meta Model

What is the system model? How is it developed? The Meta Model Architecture will be introduced in order to answer these questions. Meta modeling is generally described using a four-layer architecture.

These layers represent different levels of data and metadata. Figure 3 shows an example of the layers used for modeling a target system.

(Figure: the four layers, each describing the layer below it — M3 meta-metamodel (MOF Model), M2 metamodel (UML, IDL, XML), M1 application model (Person, Teacher, Student), M0 modeled data (Teachers Jose and Milton; Students Nori, Jon, Mari, and Carn))
Figure 3: Meta Model Architecture35

35 Using Metamodels to Promote Data Integration in an e-Government Application Scenario, Adriana Figueiredo, Aqueo Kamada, IEEE, 2003, Page 4

The four layers are:

Information: The information layer refers to actual instances of information. For example, there are two instances of data representing "Jose" and "Milton" as teachers, and four instances of data representing "Nori", "Jon", "Mari", and "Carn" as students.

Model: The model layer (also known as the metadata layer) defines the information layer, describing the format and semantics of the objects and the relationships among the objects.

For example, the metadata specifies the "Person" class and its instances, which are "Teacher" and "Student". Relationships between objects are defined, such as "Teach (Teacher Student)" and "Learn (Student Teacher)".

Metamodel: The metamodel layer (also known as the meta-metadata layer) defines the model layer, describing the structure and semantics of the model. For example, the meta-metadata specifies a system design that describes its structure and data flow. The metamodel can also be thought of as a modeling language used to describe different kinds of systems.

Meta-metamodel: The meta-metamodel layer defines the metamodel layer, describing the structure and semantics of the meta-metadata. It is the modeling language used to define different kinds of metamodels. Typically, the meta-metamodel is defined by the system that supports the metamodeling environment.

This thesis will focus on developing the M1 layer, a model involving metadata and meta-objects, to develop the system model targeting secured systems with an information policy that prevents threats of security breach.
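To make the M1 and M0 layers concrete, the following sketch is a minimal illustration in Java (chosen here only for illustration; the thesis does not prescribe an implementation language, and all identifiers are hypothetical). The class definitions play the role of the M1 application model of Figure 3, while the objects created in main() correspond to the M0 modeled data.

    // Hypothetical sketch: classes (M1) define the format and semantics of the data,
    // while the objects created in main() correspond to the M0 layer of Figure 3.
    import java.util.List;

    abstract class Person {                       // M1: class defined at the model layer
        private final String name;
        Person(String name) { this.name = name; }
        String getName() { return name; }
    }

    class Teacher extends Person {
        Teacher(String name) { super(name); }
        void teach(Student s) {                   // relationship "Teach (Teacher Student)"
            System.out.println(getName() + " teaches " + s.getName());
        }
    }

    class Student extends Person {
        Student(String name) { super(name); }
        void learn(Teacher t) {                   // relationship "Learn (Student Teacher)"
            System.out.println(getName() + " learns from " + t.getName());
        }
    }

    public class MetaModelDemo {
        public static void main(String[] args) {
            // M0: concrete instances ("modeled data") named in Figure 3
            List<Teacher> teachers = List.of(new Teacher("Jose"), new Teacher("Milton"));
            List<Student> students = List.of(
                    new Student("Nori"), new Student("Jon"),
                    new Student("Mari"), new Student("Carn"));
            for (Teacher t : teachers)
                for (Student s : students)
                    t.teach(s);
        }
    }

In the same spirit, the M2 layer corresponds to the UML constructs (class, association, attribute) used to draw such a model, and the M3 layer to the MOF facilities used to define UML itself.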

2-3-2. UML

UML is one of the standard metadata models and modeling languages. A wide variety of object-modeling methodologies were developed during the 1980s, such as OMT, Booch, and OOSE. Although these modeling methodologies were similar, the languages and notations used to represent them were different. Moreover, the visual modeling tools that implemented these methodologies were not interoperable. UML quickly became the standard modeling language; it lets modeling take place at a higher level of abstraction so that the model can be updated easily and re-used for other systems.

UML is a standardized modeling language consisting of an integrated set of diagrams, developed to help system and software developers accomplish the following tasks36:
- Specification
- Visualization
- Architecture design
- Construction
- Simulation and testing
- Documentation

UML was originally developed with the idea of promoting communication and productivity among the developers of object-oriented systems. Currently, UML has been updated to UML 2.037. UML 2.0 has resolved many of the shortcomings of the previous version of UML, such as the lack of a diagram interchange capability, inadequate semantics definition, and alignment with MOF. UML 2.0 has the following features38:
- Improved semantics in the class diagrams
- Support for large systems and business modeling
- Diagram interchange capabilities
- Alignment with MOF (Meta Object Facility) and MDA (Model Driven Architecture)

36 Excerpted from UML 2 for Dummies, Michael Jesse Chonoles, James A. Schardt, July 2, 2003, Page 14-15
37 UML 2.0, The Current Official Version, http://www.uml.org/#Articles

The UML 2.0 architecture39 is as follows.

(Figure: taxonomy of UML 2.0 diagrams — structural diagrams: class, object, component, package, deployment, and composite structure; behavioral diagrams: activity, use-case, state-machine (with protocol state machines), and interaction diagrams (sequence, timing, communication, and interaction overview))
Figure 4: UML 2.0 Architecture

UML 2.0 will be used, since this meta-language is the most extensible and compatible with any system.

38 Excerpted from UML 2.0 in a Nutshell, Dan Pilone, Neil Pitman, Page 10-12
39 Excerpted from UML 2.0 in a Nutshell, Dan Pilone, Neil Pitman, Page 19

The system model development process flow using UML is presented as follows.

(Figure: Outline Requirements -> Develop Models -> Verification and Validation)
Figure 5: Process Flow to Develop the System Model

First, outline requirements and rules for the system to prevent security breaches are described. Second, the system model is developed using UML. UML is the most powerful meta-language for designing a system model; the language is understandable, easy to handle, and re-usable for other similar system models. Finally, the system model should be verified.

Many verification technologies have been presented recently. UPPAAL is one of the powerful verification tools for a system. One scenario from the design will be selected and verified using UPPAAL. Complete verification of the system model will not be performed, since the purpose of this thesis is to introduce the system model development process concerning information risk management, not to complete the development of the system. Some sample designs and verifications of the particular scenario shown throughout this thesis will suffice.

The focus here will be on analyzing current security breaches and risk assessment, and on developing information policies for preventing unacceptable security breaches. Completing the system development will be taken into account in future work.

3. Risk Assessment Using the Current Security Issues

Risk assessment is one of the main components of the information risk management model. In this chapter, common risk assessment concepts will be used for the model and then improved to suit information risk management in this thesis.

Finally, the risk levels of current security breaches will be determined and security breaches involving unacceptable risk will be addressed; in addition, some suggestions for preventing security breaches will be presented.

3-1. Risk Assessment Methodology

(Figure: the threat, the vulnerability, and the asset combine into the risk and the adjusted risk)
Figure 6: Information Risk Profile40

How are information risk levels measured for each security breach? How is the risk assessed? According to "The Executive Guide to Information Security"41, the risk level is determined by the set of assets in the organization and system, the threat to those assets, and the vulnerability of the organization or system.

40 Excerpted from The Executive Guide to Information Security, Mark Egan, Symantec Press, Nov 2004, Figure 5-1, Page 109
41 The Executive Guide to Information Security, Mark Egan, Symantec Press, Nov 2004, Page 104-110

In this book, the risk level is assessed using a summary matrix containing a brief description of the risk assessment measurements, which are the asset, the threats, and the vulnerability. The assessment is deployed for each set of the three risk assessment measurements.

However, this is not a quantitative method, and it results in imprecise assessment. The method should be more quantitative and provide more precise assessment so that the information risk model can be applied in systems engineering. As Figure 6 shows, each risk assessment measurement may carry an assessment value determined by the system developer and stakeholders; the risk is a function of assets, threats, and vulnerabilities. The threat of the security problem, the vulnerability involved in the system and organization, and the assets in the system and organization should be assessed rather than only described.

The risk assessment process should be analytical, since information risk management must be a systematic process by which an organization identifies, reduces, and controls its potential risks and losses. At this point, the definition42 of each risk measurement is shown in the following table.

Threat: The capacity and intention of an adversary to undertake actions that are detrimental to an organization's interests. It cannot be controlled by the owner or user. The threat may be encouraged by a vulnerability in an asset or discouraged by an owner's countermeasures.

Vulnerability: Any weakness in an asset or countermeasure that can be exploited by an adversary or competitor to cause damage to an organization's interests.

Asset: Anything of value (people, information, hardware, software, facilities, reputation, activities, and operations). The more critical the asset is to an organization accomplishing its mission, the greater the effect of its damage or destruction.

Table 4: Terms for Risk Measurement

42 National Infrastructure Protection Center; Risk Management: An Essential Guide to Protecting Critical Assets; November 2002, Page 8-9
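As a minimal sketch of how the three measurements defined in Table 4 can be recorded for a single asset before the assessment steps below are applied, the following Java fragment pairs short descriptions with the numerical 1-to-10 ratings introduced later in this section. The class and field names are hypothetical and are not part of the thesis.

    // Hypothetical worksheet row: one asset with its threat and vulnerability,
    // each carrying a short description and a 1-10 numerical rating.
    public class RiskMeasurement {
        final String assetName;           // anything of value (e.g., a customer database)
        final String threatDescription;   // adversary capacity and intention
        final String vulnerability;       // exploitable weakness in the asset or countermeasure
        final int assetRating;            // 1-10, from the asset assessment
        final int threatRating;           // 1-10, from the threat assessment
        final int vulnerabilityRating;    // 1-10, from the vulnerability assessment

        RiskMeasurement(String assetName, String threatDescription, String vulnerability,
                        int assetRating, int threatRating, int vulnerabilityRating) {
            this.assetName = assetName;
            this.threatDescription = threatDescription;
            this.vulnerability = vulnerability;
            this.assetRating = assetRating;
            this.threatRating = threatRating;
            this.vulnerabilityRating = vulnerabilityRating;
        }

        public static void main(String[] args) {
            RiskMeasurement m = new RiskMeasurement(
                    "Customer information database",
                    "Ex-employee misusing database privileges",
                    "Poor access controls on the database system",
                    10, 7, 9);
            System.out.println(m.assetName + ": asset=" + m.assetRating
                    + ", threat=" + m.threatRating + ", vulnerability=" + m.vulnerabilityRating);
        }
    }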

A new risk assessment process suited to information risk management in systems engineering will be shown in this thesis. Each risk assessment measurement is determined as follows:
- The asset assessment: The magnitude and effect of the potential loss in the systems and organization (What is the likely effect if an identified asset is lost or harmed by one of the identified unwanted events?)
- The threat assessment: The probability of loss in the systems and organization (How likely is it that an adversary can and will attack the identified assets?)
- The vulnerability assessment: The magnitude of the exploitable situations (What are the most likely vulnerabilities that the adversary will use to target the identified assets?)

Developers, including systems engineers, analysts, and security managers, should identify and evaluate the value of each risk assessment measurement. The magnitude is measured by verbal ratings such as high, middle, and low. The risk assessment steps are shown here.

Step 1. Asset Assessment:
Identify and focus on the confidential information involved in the organization and system processes. The assets include customer information, business and technology know-how, government secret information, and home security information.

For each individual asset, identify undesirable events and the effect that the loss, damage, or destruction of that asset would have on the organization and system processes.

Step 2. Threat Assessment:
Focus on the adversaries or events that can affect the identified assets. Common types of adversaries include criminals, business competitors, hackers, and foreign intelligence services. Certain natural disasters and accidents are taken into account even though they are not intentional.

Step 3. Vulnerability Assessment:
Identify and characterize vulnerabilities related to specific assets or undesirable events.

Look for exploitable situations created by a lack of adequate security, personal behavior, a lack of information management, mishandled privileged documents, and insufficient security procedures. Typical vulnerabilities include the absence of guards, poor access controls, a lack of stringent processes and software, and unscreened visitors in secure areas.

Step 4. Risk Assessment:
Combine and evaluate the former assessments in order to give a complete picture of the risk to an asset of confidential information in the organization and system process.

The risk is assessed in terms of how each of these ratings (high, middle, low) interacts to arrive at a level of risk for each asset. The terms used in the rating may be imprecise; in situations where more precision is desired, a numerical rating on a 1-to-10 scale can be used. The numerical scale is easier for systems analysts and developers to replicate and combine in an assessment with other scales. How each risk assessment measurement is evaluated has already been presented. For the next procedure, the risk level for an asset will be required.

How can the risk level be assessed? For this question, the DOD43 (Department of Defense) standard definitions44 for the probability that an undesired event will occur and for the severity level of its consequences are used, since these definitions have been adopted by many companies; moreover, they are the United States government standard and may be required for any government information system. The definitions are shown in the following tables.

Probability Level    Specific Event
A: Frequent          Likely to occur frequently
B: Probable          Will occur several times
C: Occasional        Likely to occur sometime
D: Remote            Unlikely but possible to occur
E: Improbable        So unlikely it can be assumed occurrence may not be experienced

Table 5: Probability Levels of an Undesired Event

Severity Level     Characteristics
I: Catastrophic    Death, system loss, or severe environmental damage
II: Critical       Severe injury, severe occupational illness, major system or environmental damage
III: Marginal      Minor injury, minor occupational illness, or minor system or environmental damage
IV: Negligible     Less than minor injury, occupational illness, or less than minor system or environmental damage

Table 6: Severity Levels of Undesired Event Consequences

This process results in a matrix that pairs and ranks the most important assets with the threat scenarios most likely to occur. The risk level will be determined by the following matrix.

43 Department of Defense home page, http://www.defenselink.mil/
44 Combating Terrorism: Threat and Risk Assessment Can Help Prioritize and Target Program Investments, GAO, April 1998, Page 7

Severity level     A. Frequent   B. Probable   C. Occasional   D. Remote   E. Improbable
I. Catastrophic    I A           I B           I C             I D         I E
II. Critical       II A          II B          II C            II D        II E
III. Marginal      III A         III B         III C           III D       III E
IV. Negligible     IV A          IV B          IV C            IV D        IV E

Each cell of the matrix is assigned one of the following risk levels:
Risk Level 1: Unacceptable (reduce risk through countermeasures)
Risk Level 2: Undesirable (management decision required)
Risk Level 3: Acceptable with review by management
Risk Level 4: Acceptable without review

Table 7: Risk Assessment Matrix45

This is the DOD risk assessment definition, widely used by many companies.

The definition should be modified to suit the information risk assessment. The probability of occurrence is useful for the information risk assessment as it stands. However, the definition of the severity level should be modified as follows, since the information risk assessment deals only with information risks such as the leaking of confidential information and privileged documents and the misuse of technical know-how and home security information.

Security Level     Characteristics
I: Catastrophic    An enormous amount of secret information; severe potential for misuse resulting in severe environmental damage
II: Critical       Secret information for a particular area or field; high potential for misuse in only a limited area; major system or environmental damage
III: Marginal      Confidential information; low potential for misuse; minor system or environmental damage
IV: Negligible     Less than minor and unclassified information; less than minor system or environmental damage

Table 8: Security Levels of Undesired Event for an Asset in Information Risk Assessment

45 Combating Terrorism: Threat and Risk Assessment Can Help Prioritize and Target Program Investments, GAO, April 1998, Page 8

To assess information risk, the security level will be used instead of the severity level. The risk assessment matrix can be used for the information risk assessment, since it has been accepted by many companies; moreover, the matrix remains useful for the information risk assessment. The probability of the unwanted event occurrence clearly increases with increasing threat and increasing vulnerability.

In this thesis, the simple formula for the probability over a given time interval is:

Threat * Vulnerability

Each assessment measurement of the threat and the vulnerability is expressed by a numerical rating (1 to 10). The threat and vulnerability ratings will be shown in the section for each assessment. The following matrix is used to determine the probability of the unwanted event occurrence from the numerical rating.

Probability of occurrence    Numerical rating for threat and vulnerability
A. Frequent                  81 or more
B. Probable                  61 - 80
C. Occasional                41 - 60
D. Remote                    21 - 40
E. Improbable                20 or less

Table 9: Rating for the Probability of Occurrence

A security level rating corresponds to an asset rating for the confidential information in the organization and system process, based on the following matrix.

Security Level     Numerical rating for asset
I: Catastrophic    10
II: Critical       7 - 9
III: Marginal      4 - 6
IV: Negligible     1 - 3

Table 10: Rating for the Security Level
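To illustrate how Tables 9 and 10 convert the numerical ratings into inputs for the risk assessment matrix, the following Java sketch (hypothetical method names; the thesis defines the procedure only through the tables) derives the probability of occurrence from the product of the threat and vulnerability ratings and the security level from the asset rating, then reports the corresponding Table 7 cell. The final risk level (1 to 4) is read from the shaded regions of Table 7 rather than hard-coded here, since that mapping is given graphically.

    // Hypothetical sketch of the rating procedure implied by Tables 9 and 10.
    public class RiskRating {

        // Table 9: threat * vulnerability (each rated 1-10) -> probability of occurrence
        static String probabilityLevel(int threat, int vulnerability) {
            int score = threat * vulnerability;   // ranges from 1 to 100
            if (score >= 81) return "A. Frequent";
            if (score >= 61) return "B. Probable";
            if (score >= 41) return "C. Occasional";
            if (score >= 21) return "D. Remote";
            return "E. Improbable";
        }

        // Table 10: asset rating (1-10) -> security level
        static String securityLevel(int asset) {
            if (asset >= 10) return "I. Catastrophic";
            if (asset >= 7)  return "II. Critical";
            if (asset >= 4)  return "III. Marginal";
            return "IV. Negligible";
        }

        public static void main(String[] args) {
            // Example values only: capable adversary (threat 7), weak controls (vulnerability 9),
            // highly confidential customer data (asset 10).
            int threat = 7, vulnerability = 9, asset = 10;
            String probability = probabilityLevel(threat, vulnerability);  // 63 -> "B. Probable"
            String security = securityLevel(asset);                        // 10 -> "I. Catastrophic"
            // The Table 7 cell is the security level row crossed with the probability column,
            // here "I B"; its shading in Table 7 gives the final risk level (1-4).
            String cell = security.substring(0, security.indexOf('.')) + " " + probability.charAt(0);
            System.out.println("Probability of occurrence: " + probability);
            System.out.println("Security level: " + security);
            System.out.println("Risk assessment matrix cell: " + cell);
        }
    }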

Step 5. Identification of Countermeasure Options:
Provide the risk acceptance authority with countermeasures, or groups of countermeasures, that will lower the overall risk to the asset to an acceptable level. By evaluating the effectiveness of possible countermeasures against specific adversaries, the systems engineer can determine the most cost-effective options. In presenting countermeasures to the risk acceptance authority, the systems engineer or security analyst should provide at least two countermeasure packages as options. Each option should also include the expected costs and the amount of risk that the decision-maker would accept by selecting that particular option. The graphical representation of the information risk assessment is shown as follows.

(Figure: the information risk assessment matrix is built from the asset assessment, the threat assessment, and the vulnerability assessment, and together with the countermeasure options it feeds the information policy)
Figure 7: Structure of Workflow for Information Policy Setting

In this report, the information policy will be used as the security requirement for the case study. The information policy will be formatted based on the assessment of asset, threat, and vulnera