Student literature assignment
The mid-semester requirement of the course is the independent, at-home study of a professional publication related to the course material (either suggested by the lecturers, or brought by the student and approved by the lecturers), and its presentation in a short student talk.
A list of articles recommended by the lecturers, with short abstracts, is given in the table below. The selected article can be requested from any of the lecturers, or write an email to the following address: farkasr@mit.bme.hu
2020
1. Sher Afzal Khan, Nazir Ahmad Zafar, Farooq Ahmad, Saeed Islam: Extending Petri net to reduce control strategies of railway interlocking system
Abstract: In our previous articles we presented a step-by-step refinement process towards the development of safety properties of a moving block interlocking system (MBRIS). The refinement process started from abstraction to fuzzy-based safety properties using Z, and then a fuzzy multi-agent specification language. However, one-dimensional control of a train passing through a switch and level crossing was not discussed. This paper reduces the existing two-dimensional controls along the switch and level crossing to one dimension, shifting them to the train only. For example, in the existing model the train movement along the switch and level crossing components depends on the control of both the train and the components, whereas in one-dimensional control the train is the only authority controlling a switch and level crossing required for its desired operation. For this reduction, concurrent and mobile agent concepts are required. Therefore, we integrate mobile agent concepts with Petri nets to develop the mobile Petri net (MPN), a new class of PNs, which supports both mobility and concurrency. Further, we prove that the collection of different MPNs in a connected network is a PN. This proof allows us to use the properties of PNs to verify the system. Finally, we use the MPN to model the safety properties of MBRIS along the switch and level crossing. This provides one-dimensional control to a train along a switch and level crossing, which increases the safety of the railway interlocking system. Moreover, we use a reachability graph (RG) to verify the switch and level crossing models.
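To get a feel for the reachability-graph verification mentioned in this abstract, here is a minimal sketch with an ordinary place/transition net (not the paper's MPN formalism; the toy interlocking net and its names are invented). It builds the reachability graph by breadth-first search and checks a safety property on every reachable marking:

```python
from collections import deque

def fire(marking, pre, post):
    """Fire a transition if enabled; return the successor marking or None."""
    if all(marking[p] >= n for p, n in pre.items()):
        new = dict(marking)
        for p, n in pre.items():
            new[p] -= n
        for p, n in post.items():
            new[p] = new.get(p, 0) + n
        return new
    return None

def reachability_graph(initial, transitions):
    """Breadth-first exploration of all reachable markings."""
    seen = {tuple(sorted(initial.items()))}
    edges = []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            succ = fire(m, pre, post)
            if succ is not None:
                edges.append((tuple(sorted(m.items())), name, tuple(sorted(succ.items()))))
                key = tuple(sorted(succ.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(succ)
    return seen, edges

# Toy interlocking fragment: the train may occupy the switch or the level
# crossing, never both (mutual exclusion via a shared "free" token).
transitions = {
    "enter_switch":   ({"free": 1}, {"on_switch": 1}),
    "leave_switch":   ({"on_switch": 1}, {"free": 1}),
    "enter_crossing": ({"free": 1}, {"on_crossing": 1}),
    "leave_crossing": ({"on_crossing": 1}, {"free": 1}),
}
initial = {"free": 1, "on_switch": 0, "on_crossing": 0}
markings, edges = reachability_graph(initial, transitions)
# Safety check over the full reachability graph:
assert all(dict(m)["on_switch"] + dict(m)["on_crossing"] <= 1 for m in markings)
print(len(markings))
```

The shared `free` token plays the role of the single control authority: the safety check passes because no reachable marking puts the train on both components at once.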
2. Pramod Kumar, Lalit Kumar Singh, Nilesh Chaudhari, Chiranjeev Kumar: Availability analysis of safety-critical and control systems of NPP using stochastic modeling
Abstract: Non-functional requirements are essentially important and play a vital role in ensuring the quality of applications ranging from safety-critical systems (SCS) to simple gaming applications. SCS demand not only safe and reliable systems, but systems that remain safe and available while under attack. Availability analysis approaches include, but are not limited to, cluster techniques, Markov chain models, Reliability Block Diagrams (RBD), Fault Tree Analysis (FTA) and Flow Networks. The classical approaches fail to account for a comprehensive and accurate analysis of diverse characteristics such as the temporal behavior of systems, uncertainty in system behavior and failure data, functional dependencies among components, and multiple failure modes of components or systems. This paper presents a novel approach to the availability analysis of the Digital Feed Water Control System (DFWCS) of a nuclear power plant, which considers the maintenance and repair of the main-steam safety valves. The approach is useful when no operational profile data is available. The system has been modeled using a Stochastic Petri Net capturing all the system requirements, along with the partial failures of its subsystems and common-cause failures, and analyzed using the TimeNet tool. The proposed methodology proves to be efficient and overcomes the limitations of the traditional approaches and the Markov model approach, as it computes the state-transition probabilities rather than assuming or qualitatively assessing them.
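As a minimal illustration of the availability figures such analyses produce (the paper itself uses a Stochastic Petri Net, not this closed-form shortcut, and all numbers below are invented): for a component with exponential failure and repair, the two-state Markov model gives the steady-state availability A = MTBF / (MTBF + MTTR), and independent blocks combine by series/parallel rules:

```python
def steady_state_availability(mtbf, mttr):
    """Two-state Markov model: A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

# Hypothetical figures (hours) for two redundant valve channels and a controller.
valve = steady_state_availability(mtbf=8760.0, mttr=24.0)
controller = steady_state_availability(mtbf=43800.0, mttr=8.0)

# 1-out-of-2 redundant valves in series with the controller:
valves_parallel = 1 - (1 - valve) ** 2
system = valves_parallel * controller
print(round(system, 6))
```

Note that these combination rules assume independent failures; the SPN approach in the paper is needed precisely when dependencies and partial failures break that assumption.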
3. Rui Wang, Jérémie Guiochet, Gilles Motet, Walter Schön: Modelling confidence in railway safety case
Abstract: Railway standard EN50129 clarifies the safety acceptance conditions of safety-related electronic systems for signalling. It requires a structured argumentation, named the Safety Case, to present the fulfilment of these conditions. As guidance for building the Safety Case, this standard provides the structure of high-level safety objectives and recommendations of development techniques according to the different Safety Integrity Levels (SIL). Nevertheless, the rationale connecting these techniques to the high-level safety objectives is not explicit. The proposed techniques stem from experts' belief in their effectiveness and efficiency in achieving the underlying safety objectives. So, how should one formalize and assess this belief? And, as a result, how much confidence can we have in the safety of railway systems when these standards are used? To deal with these questions, the paper successively addresses two aspects: (1) making the safety assurance rationale explicit by modelling the Safety Case with GSN (Goal Structuring Notation) according to the EN5012x standards; (2) proposing a quantitative framework based on Dempster-Shafer theory to formalize and assess the confidence in the Safety Case. A survey amongst safety experts is carried out to estimate the confidence parameters. With these results, application guidance for this framework is provided based on the Wheel Slide Protection (WSP) system.
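The Dempster-Shafer machinery the paper builds on can be shown in a few lines. The sketch below implements Dempster's rule of combination; the frame, the mass numbers and the two "expert" opinions are invented for illustration and have nothing to do with the paper's survey data:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to disjoint hypotheses
    k = 1.0 - conflict                   # normalisation constant
    return {h: w / k for h, w in combined.items()}, conflict

# Hypothetical opinions on a safety claim; frame = {sufficient, insufficient}.
S, I = frozenset({"suff"}), frozenset({"insuff"})
Theta = S | I                            # total ignorance
expert1 = {S: 0.6, Theta: 0.4}
expert2 = {S: 0.5, I: 0.2, Theta: 0.3}
belief, conflict = dempster_combine(expert1, expert2)
print(belief[S], conflict)
```

Unlike a Bayesian prior, mass left on `Theta` represents explicit ignorance, which is what makes the theory attractive for modelling confidence in argumentation.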
4. Mohammed Talebberrouane, Faisal Khan, Zoubida Lounis: Availability analysis of safety critical systems using advanced fault tree and stochastic Petri net formalisms
Abstract: Failure scenario analysis constitutes one of the cornerstones of risk assessment and availability analysis. After a detailed review of available methods, this paper identifies two distinct formalisms to analyze failure scenarios and systems' availability: generalized stochastic Petri nets (GSPN) and fault tree driven Markov processes (FTDMP). The FTDMP formalism is a combination of the Markov process and the fault tree, which aims to overcome fault tree limitations while maintaining the use of deductive logic. The GSPN is a Petri net with probabilistic analysis using Monte Carlo simulation. The effectiveness of both methods is studied through an emergency flare system including a knockout drum. It is observed that GSPN provides a robust and reliable mechanism for accident scenario analysis. It provides additional information such as event frequencies in operating and failing modes, and expected occurrence timings and durations resulting from different complex sequences, even for multi-state variables, which could be used to design a safety management system. Although FTDMP is a powerful formalism, it provides limited information.
5. Salvatore Distefano, Antonio Puliafito: Reliability and availability analysis of dependent–dynamic systems with DRBDs
Abstract: Reliability/availability evaluation is an important, often indispensable, step in designing and analyzing (critical) systems, whose importance is constantly growing. When the complexity of a system is high, dynamic effects can arise or become significant. The system might be affected by dependent, cascade, on-demand and/or common cause failures, its units could interfere (load sharing, inter/sequence-dependency), and so on. It is also of great interest to evaluate redundancy and maintenance policies but, since dynamic behaviors usually do not satisfy the stochastic independence assumption, notations such as reliability block diagrams (RBDs), fault trees (FTs) or reliability graphs (RGs) become approximated/simplified techniques, unable to capture dynamic–dependent behaviors. To overcome this problem we developed a new formalism derived from RBDs: the dynamic RBDs (DRBDs). In this paper we explain how the DRBD notation is able to adequately model, and therefore analyze, dynamic–dependent behaviors and complex systems. Particular emphasis is given to the modeling and analysis phases, from both the theoretical and the practical point of view. Several case studies of dynamic–dependent systems, selected from the literature and related to different application fields, are proposed. In this way we also compare the DRBD approach with other methodologies, demonstrating its effectiveness.
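For contrast with the dynamic behaviors the paper targets, a static RBD is easy to evaluate under the independence assumption. A minimal sketch with made-up block reliabilities and an invented system layout:

```python
def series(*rs):
    """All blocks must work: reliabilities multiply."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """At least one block must work: unreliabilities multiply."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Static RBD for a toy system: power -> (pump A || pump B) -> controller.
# Dynamic effects (load sharing, cascades, sequence dependency) break the
# independence these formulas assume; that is exactly the gap DRBDs address.
r_system = series(0.999, parallel(0.95, 0.95), 0.99)
print(round(r_system, 6))
```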
7. Antti Pakonen, I. Buzhinsky, K. Björkman: Model checking reveals design issues leading to spurious actuation of nuclear instrumentation and control systems
Abstract: A spurious actuation of an industrial instrumentation and control (I&C) system is a failure mode where the system or its component inadvertently produces an operation without a justified reason to do so. Design issues leading to spurious failures are difficult to analyse, but pose a high risk for safety. Model checking is a formal verification method that can be used for exhaustive analysis of I&C systems. In this paper, we explain how formal properties that address spurious failures can be specified, and how model checking can then be used to verify I&C application logic designs based on vendor-specific function block diagrams. Based on over ten years of successful practical projects in the Finnish nuclear industry, we present 21 real-world design issues (representing 37% of all detected issues), each involving a systemic failure that could lead to spurious actuation of nuclear safety I&C. We then describe how random failures of the underlying hardware architecture — another cause for spurious actuation — can also be included in the models. With an experimental evaluation based on real-world nuclear industry models, we demonstrate that our method can be effectively used for the verification of single failure tolerance.
8. Dong-Young Lee, Joon Lyou: Availability Evaluation for Architectures of the Reactor Protection System
Abstract: The Reactor Protection System (RPS) generates the signal to trip the reactor automatically whenever the plant status exceeds the predefined limits. The protection system consists of a k-out-of-m redundant architecture to assure reliable action. In this paper, an availability analysis method using the binomial process is presented to evaluate the MTBFO (Mean Time Between Forced Outage) of the k-out-of-m structure. The suggested method is compared with the result of a Markov process to verify its validity, and applied to several kinds of RPS architectures for a comparative evaluation of system-level reliability.
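The binomial computation behind k-out-of-m availability, which the abstract refers to, can be sketched directly. The per-channel availability below is hypothetical, and this sketch assumes independent, identical channels (the MTBFO evaluation in the paper goes further):

```python
from math import comb

def k_out_of_m_availability(k, m, a):
    """Probability that at least k of m identical channels are up,
    assuming independent channels with per-channel availability a."""
    return sum(comb(m, i) * a**i * (1 - a)**(m - i) for i in range(k, m + 1))

# Typical RPS voting logics with a hypothetical channel availability:
a = 0.999
for k, m in [(2, 3), (2, 4), (3, 4)]:
    print(f"{k}-out-of-{m}: {k_out_of_m_availability(k, m, a):.9f}")
```

Comparing the voting logics shows the usual trade-off: 2-out-of-4 is the most available, 3-out-of-4 the least, with 2-out-of-3 in between.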
9. Yang Gyun Oh, Jin Kwon Jeong, Chang Jae Lee, Yoon Hee Lee, Seung Min Baek, Sang Jeong Lee: Fault-tolerant Design for Advanced Diverse Protection System
Abstract: For the improvement of APR1400 Diverse Protection System (DPS) design, the Advanced DPS (ADPS) has recently been developed to enhance the fault tolerance capability of the system. Major fault masking features of the ADPS compared with the APR1400 DPS are the changes to the channel configuration and reactor trip actuation equipment. To minimize the fault occurrences within the ADPS, and to mitigate the consequences of common-cause failures (CCF) within the safety I&C systems, several fault avoidance design features have been applied in the ADPS. The fault avoidance design features include the changes to the system software classification, communication methods, equipment platform, MMI equipment, etc. In addition, the fault detection, location, containment, and recovery processes have been incorporated in the ADPS design. Therefore, it is expected that the ADPS can provide an enhanced fault tolerance capability against the possible faults within the system and its input/output equipment, and the CCF of safety systems.
10. Jinsoo Shin, Hanseong Son, Gyunyoung Heo: Cyber Security Risk Evaluation of a Nuclear I&C Using BN and ET
Abstract: Cyber security is an important issue in the field of nuclear engineering because nuclear facilities use digital equipment and digital systems that can lead to serious hazards in the event of an accident. Regulatory agencies worldwide have announced guidelines for cyber security related to nuclear issues, including U.S. NRC Regulatory Guide 5.71. It is important to evaluate cyber security risk in accordance with these regulatory guides. In this study, we propose a cyber security risk evaluation model for nuclear instrumentation and control systems using a Bayesian network and event trees. As it is difficult to perform penetration tests on the systems, the evaluation model can inform research on cyber threats to cyber security systems for nuclear facilities through the use of prior and posterior information and backpropagation calculations. Furthermore, we suggest a methodology for the application of analytical results from the Bayesian network model to an event tree model, which is a probabilistic safety assessment method. The proposed method will provide insight into safety and cyber security risks.
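The prior/posterior reasoning mentioned here reduces, in its simplest two-node form, to Bayes' rule. A deliberately tiny sketch with invented numbers (the paper's Bayesian network is much larger):

```python
def posterior(prior, likelihood):
    """Exact inference in a two-node Bayesian network:
    P(attack | alarm) via Bayes' rule.
    likelihood = (P(alarm | attack), P(alarm | no attack))."""
    p_alarm_given_attack, p_alarm_given_no_attack = likelihood
    p_alarm = p_alarm_given_attack * prior + p_alarm_given_no_attack * (1 - prior)
    return p_alarm_given_attack * prior / p_alarm

# Invented numbers: rare attacks, a fairly reliable intrusion detector.
p = posterior(prior=0.01, likelihood=(0.9, 0.05))
print(round(p, 4))
```

Even a good detector yields a modest posterior when the prior is small, which is one reason quantitative models are preferred over intuition for such assessments.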
13. Cheng Pang, Valeriy Vyatkin: Automatic Model Generation of IEC 61499 Function Block Using Net Condition/Event Systems
Abstract: The IEC 61499 standard establishes a framework specifically designed for the implementation of decentralized reconfigurable industrial automation systems. However, the validation and verification of distributed systems is difficult and error-prone. This paper discusses the need for model generators capable of automatically translating IEC 61499 function blocks into formal models following specific execution semantics. In particular, the paper introduces the prototype Net Condition/Event Systems model generator and aims to summarize generic techniques of model translation.
14. Caio A. R. Dos Santos, Rivalino Matias Jr., Kishor S. Trivedi: An Empirical Exploratory Analysis of Failure Sequences in a Commodity Operating System
Abstract: A fundamental need for software reliability engineering is to comprehend how software systems fail, which means understanding the dynamics that govern different types of failure manifestation. In this paper, we present an exploratory study on multiple-event failures, looking for systematic patterns of sequences of failures in logs of a commodity operating system. This study is based on real failure data collected from hundreds of computers. The major contribution of this paper is the method proposed to discover patterns of failure sequences and their attributes. The method is generic enough to be applied to any other software system, with minor changes. The empirical findings of this study include 153 different patterns of OS failure sequences discovered, along with statistical analyses of their properties.
15. Jiawen Xiong, Xiangxing Bu, Yanhong Huang, Jianqi Shi, Weigang He: Safety Verification of IEC 61131-3 ST Programs
Abstract: With the development of the industrial control system, Programmable Logic Controllers (PLCs) are increasingly adopted in the process automation. Moreover, many PLCs play key roles in safety-critical systems like nuclear power plants, where robust and reliable control programs are required. To ensure the quality of programs, testing and verification methods are necessary. In this paper, we present a novel methodology which applies model checking to verifying PLC programs. Specifically, we focus on the Structured Text (ST) language which is a widely-used, high-level programming language defined in the IEC 61131-3 standard. A formal model named Behaviour Model (BM) is defined to specify the behaviour of ST programs. An algorithm based on variable state analysis for automatically extracting the BM from an ST program is given. An algorithm based on the automata-theoretic approach is proposed to verify Linear Temporal Logic (LTL) properties on the BM. Finally, a real-life case study is presented.
19. Subhasish Mitra, Nirmal R. Saxena, Edward J. McCluskey: Common-Mode Failures in Redundant VLSI Systems: A Survey
Abstract: This paper presents a survey of CMF (common-mode failures) in redundant systems with emphasis on VLSI (very large scale integration) systems. The paper discusses CMF in redundant systems, their possible causes, and techniques to analyze reliability of redundant systems in the presence of CMF. Current practice and recent results on the use of design diversity techniques for CMF are reviewed. By revisiting the CMF problem in the context of VLSI systems, this paper augments earlier surveys on CMF in nuclear and power-supply systems. The need for quantifiable metrics and effective models for CMF in VLSI systems is re-emphasized. These metrics and models are extremely useful in designing reliable systems. For example, using these metrics and models, system designers and synthesis tools can incorporate diversity in redundant systems to maximize protection against CMF.
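A standard way to quantify the effect of CMF on redundancy is the beta-factor model (a common textbook model, not necessarily the metric this survey advocates). A small sketch with invented numbers shows how even a small common-mode fraction dominates the unavailability of a duplex system:

```python
def redundant_unavailability(q, n, beta):
    """Beta-factor common-mode model: a fraction beta of each channel's
    failure probability q is common to all n redundant channels.
    Rare-event approximation, valid for small q."""
    q_independent = (1 - beta) * q   # fails independently per channel
    q_common = beta * q              # fails all channels at once
    return q_independent**n + q_common

q = 1e-3   # hypothetical per-channel failure probability
for beta in [0.0, 0.01, 0.1]:
    print(beta, redundant_unavailability(q, 2, beta))
```

With beta = 0.1, the duplex system is roughly a hundred times worse than the idealised independent case, which is why design diversity (reducing beta) matters more than adding further identical channels.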
20. Robert Eschbach: Formal Software Inspections: An Industrial Application of Function Tables and Event-B to Software of a Wayside Train Monitoring System
Abstract: The experience gained in the industrial application of software inspections using Function Tables and Event-B to a subsystem of a Wayside Train Monitoring System (WTMS) is presented in this paper. The WTMS Configuration Management System (CMS) supports the creation and management of configuration data for the WTMS. The correct and reliable implementation of the required system functions, especially those dealing with data handling and data management, is of particular importance for the overall quality of the system, since faults in these functions may lead to critical failures and malfunctioning. Therefore, the development of the data handling part of a CMS requires the use of high integrity methods, like systematic software inspections, in order to ensure the highest quality. Function Tables have been successfully applied for the inspection of safety-critical software. In our industrial project, a special variant of Function Tables was defined that can be easily mapped to formal Event-B specifications. Event-B, with its set-theoretic basis for modeling, its concept of refinement and its use of formal proof to ensure the correctness of refinement steps, is used to formally analyze the derived Function Tables. The systematic derivation of Function Tables is done by a verification-based inspection using the reading technique "stepwise abstraction".
2016
1. Frank Ortmeier, Gerhard Schellhorn: Formal Fault Tree Analysis - Practical Experiences
Abstract: Safety is an important requirement for many modern systems. To ensure the safety of complex critical systems, well-known safety analysis methods have been formalized. This holds in particular for automation systems and transportation systems. In this paper we present the formalization of one of the most widespread safety analysis methods: fault tree analysis (FTA). Formal FTA makes it possible to reason rigorously about the completeness of a fault tree. This means it is possible to prove whether or not a certain combination of component failures is critical for system failure. This is a big step forward, as informal reasoning on cause-consequence relations is very error-prone.
We report on our experiences with a real-world case study from the domain of railroads. The case study presented here is - to our knowledge - the first complete formal fault tree analysis of an infinite-state system. Until now, only finite-state systems have been analyzed with formal FTA by using model checking.
2. Ulrich Hauptmanns: Semi-quantitative fault tree analysis for process plant safety using frequency and probability ranges
Abstract: There is a growing concern in process plant safety to assess risk. Two important motives can be identified: on the one hand, the new Seveso directive, which contains a number of risk-related requirements for process plants, and on the other, the advent of the "Safety Integrity Level" (SIL) classes. These call for methods of analysis that enable one to obtain reliable results at reasonable effort. The chief obstacle to the widespread use of fault tree analysis for analyzing the safety of process plants is the dearth of appropriate reliability data. This gives rise to time-consuming and error-prone searches in the analyses performed, which nevertheless suffer from the drawback that they do not exactly reflect the situation, because the reliability data are not plant-specific. This being the case, the present approach uses generic ranges for reliability data. These are considered to reflect the uncertainty of the reliability data caused, amongst others, by their transfer from the conditions of origin to other plants. These uncertainties are propagated in evaluating the fault tree, and yield results at both the minimal cut set and structure function levels, characterized by natural-language expressions. Numerical results are supplied as additional hints for system improvement. The source of the reliability data assignments is a quality-assured evaluation in two plants of the explosives industry. The method is implemented in the program system Semi-Quantitative Fault Tree Analysis (SQUAFTA), which was successfully applied to the analysis of a process plant.
3. K. Durga Rao, V. Gopika, V.V.S. Sanyasi Rao, H.S. Kushwaha, A.K. Verma, A. Srividya: Dynamic fault tree analysis using Monte Carlo simulation in probabilistic safety assessment
Abstract: Traditional fault tree (FT) analysis is widely used for the reliability and safety assessment of complex and critical engineering systems. The behavior of components of complex systems and their interactions, such as sequence- and functional-dependent failures, spares and dynamic redundancy management, and the priority of failure events, cannot be adequately captured by traditional FTs. Dynamic fault trees (DFT) extend traditional FTs by defining additional gates, called dynamic gates, to model these complex interactions. Markov models are used in solving dynamic gates. However, the state space becomes too large for calculation with Markov models when the number of gate inputs increases. In addition, the Markov model is applicable only to exponential failure and repair distributions. Modeling test and maintenance information on spare components is also very difficult. To address these difficulties, a Monte Carlo simulation-based approach is used in this work to solve dynamic gates. The approach is first applied to a problem available in the literature which has non-repairable components. The obtained results are in good agreement with those in the literature. The approach is later applied to a simplified scheme of the electrical power supply system of a nuclear power plant (NPP), which is a complex repairable system having tested and maintained spares. The results obtained using this approach are in good agreement with those obtained using an analytical approach. In addition to point estimates of reliability measures, failure time and repair time distributions are also obtained from simulation. Finally, a case study on the reactor regulation system (RRS) of an NPP is carried out to demonstrate the application of the simulation-based DFT approach to large-scale problems.
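The core of the simulation approach can be sketched for a single dynamic gate. Below is a Monte Carlo estimate for a warm-spare gate; all rates, the mission time and the gate choice are invented for illustration, and the paper's case studies are far larger:

```python
import random

def simulate_warm_spare(lam_primary, lam_spare_dormant, lam_spare_active,
                        mission, trials, seed=1):
    """Monte Carlo estimate of mission failure probability for a warm-spare
    gate: the spare can fail while dormant; if the primary fails first, the
    spare is activated and the system fails when the spare fails too."""
    random.seed(seed)
    failures = 0
    for _ in range(trials):
        t_primary = random.expovariate(lam_primary)
        t_spare_dormant = random.expovariate(lam_spare_dormant)
        if t_spare_dormant < t_primary:
            # Spare already dead when needed: system fails with the primary.
            t_system = t_primary
        else:
            # Spare takes over, then fails at its active rate.
            t_system = t_primary + random.expovariate(lam_spare_active)
        if t_system < mission:
            failures += 1
    return failures / trials

p = simulate_warm_spare(1e-3, 1e-4, 1e-3, mission=1000.0, trials=100_000)
print(p)
```

The dormancy rate is what a static FT cannot express: the spare's failure distribution depends on *when* the primary fails, which is exactly the sequence dependency dynamic gates capture.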
4. Tony Rosqvist, Risto Tuominen: Qualification of Formal Safety Assessment: an exploratory study
Abstract: Formal Safety Assessment (FSA) is an approach adopted by the International Maritime Organisation (IMO) to support a systemic and structured assessment of proposals for new international regulations to improve shipping safety. Several case studies based on the approach have been conducted during recent years. This paper addresses the issue of confidence in a FSA, as encountered during three case studies conducted by the authors over the years 1998-2002. A peer review process, FSA qualification, is introduced to support the consolidation of confidence in FSA results. Some qualification criteria are suggested.
5. Chin-Feng Fan, Wen-Shing Wang: Validation test case generation based on safety analysis ontology
Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.
6. P. Arun Babu, C. Senthil Kumar, N. Murali: A hybrid approach to quantify software reliability in nuclear safety systems
Abstract: Technological advancements have led to the use of computer-based systems in safety-critical applications. As computer-based systems are being introduced in nuclear power plants, effective and efficient methods are needed to ensure dependability and compliance with the high reliability requirements of systems important to safety. Even after several years of research, the quantification of software reliability remains a controversial and unresolved issue. Also, existing approaches have assumptions and limitations which are not acceptable for safety applications. This paper proposes a theoretical approach combining software verification and mutation testing to quantify software reliability in nuclear safety systems. The theoretical results obtained suggest that software reliability depends on three factors: the test adequacy, the amount of software verification carried out, and the reusability of verified code in the software. The proposed approach may help regulators in licensing computer-based safety systems in nuclear reactors.
|
7. Wan-Hui Tseng, Chin-Feng Fan: Systematic scenario test case generation for nuclear safety systems
Abstract:
Context: The current validation tests for nuclear software are routinely performed by random testing, which leads to uncertain test coverage. Moreover, validation tests should directly verify the system's compliance with the original user's needs. Unlike current model-based testing methods, which are generally based on requirements or design models, the proposed model is derived from the original user's needs in text through a domain-specific ontology, and then used to generate validation tests systematically.
Objective: Our first goal is to develop an objective, repeatable, and efficient systematic validation test scheme that is effective for large systems, with analyzable test coverage. Our second goal is to provide a new model-based validation testing method that reflects the user's original safety needs.
Method: A model-based scenario test case generation for nuclear digital safety systems was designed. This was achieved by converting the scenarios described in natural language in a Safety Analysis Report (SAR), prepared by the power company for licensing review, to Unified Modeling Language (UML) sequence diagrams based on a proposed ontology of a related regulatory standard. Next, we extracted the initial environmental parameters and the described operational sequences. We then performed variations on these data to systematically generate a sufficient number of scenario test cases.
Results: The test coverage criteria, namely the equivalence partition coverage of the initial environment, condition coverage, action coverage and scenario coverage, were met using our method.
Conclusion: The proposed model-based scenario testing can provide better testing coverage than random testing. A test suite based on user needs can be provided.
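The equivalence-partition idea behind these coverage criteria can be sketched mechanically. The parameters and partitions below are invented; real SAR-derived generation, as described in the abstract, also varies operational sequences, not just initial conditions:

```python
from itertools import product

# Invented equivalence partitions of a scenario's initial environment.
partitions = {
    "water_level": ["low", "normal", "high"],
    "pump_state": ["on", "off"],
    "sensor": ["ok", "stuck"],
}

# Full combinatorial coverage of the equivalence classes: one test case
# per combination of partition choices.
test_cases = [dict(zip(partitions, combo))
              for combo in product(*partitions.values())]
print(len(test_cases))  # 3 * 2 * 2 combinations
```

Systematic enumeration like this is what makes the achieved coverage analyzable, in contrast to random testing where coverage of the partitions is only probabilistic.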
|
8. Christian Raspotnig, Andreas Opdahl: Comparing risk identification techniques for safety and security requirements
Abstract: When developing systems where safety and security are important aspects, these aspects have to be given special attention throughout the development, in particular in the requirements phase. There are many similar techniques within the safety and security fields, but few comparisons of the lessons that could be learnt and the benefits to be gained. In this paper, different techniques for identifying the risks, hazards and threats of computer-supported systems are compared. This is done by assessing the techniques' ability to identify different risks in computer-supported systems in the environment where they operate. The purpose of this paper is therefore to investigate whether and how the techniques can mutually strengthen each other. The results aid practitioners in the selection and combination of techniques, and researchers in focusing on gaps between the two fields. Among other things, the findings suggest that many safety techniques enforce a creative and systematic process by applying guide-words and structuring the results in worksheets, while security techniques tend to integrate system models with security models.
9. Igor Nai Fovino, Luca Guidi, Marcelo Masera, Alberto Stefanini: Cyber security assessment of a power plant
Abstract: Critical infrastructures and systems are today exposed not only to traditional safety and availability problems, but also to new kinds of security threats. These are mainly due to the large number of new vulnerabilities and architectural weaknesses introduced by the extensive use of information and communication technologies (ICT) in such complex systems. In this paper we present the outcomes of an exhaustive ICT security assessment targeting an operational power plant, which also included the simulation of potential cyber attacks. The assessment shows that the plant is considerably vulnerable to malicious attacks. This situation cannot be ignored, because the potential outcomes of an induced plant malfunction can be severe.
10. Liisa Haarla, Urho Pulkkinen, Mikko Koskinen, Jussi Jyrinsalo: A method for analysing the reliability of a transmission grid
Abstract: The paper describes a probabilistic method for transmission grid security evaluation. Power system security is the ability of the power system to withstand sudden disturbances such as short circuits. The method presented here uses event and fault trees and combines them with power system dynamic simulations. Event trees model the substation protection and trip operations after line faults. The different event tree end states (fault duration, circuit breaker trips) are simulated with a power system dynamic analysis program. The dynamic analysis results (power system post-fault states) are then classified as secure, alert, emergency or system breakdown. The probabilities, minimal cut sets and grid-level importance measures (Fussell-Vesely, risk increase and decrease factors) are calculated for total and partial system breakdown. In this way, the relative importance of the substation devices with regard to system breakdown can be obtained, along with the more and less likely contributing factors to system breakdown. With this method, an existing 400 kV transmission grid, with its line fault and device failure statistics, is analysed.
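The cut-set and Fussell-Vesely computations mentioned here are straightforward once the minimal cut sets are known. A sketch under the rare-event approximation, with an invented cut-set list and invented failure probabilities:

```python
def top_event_probability(cut_sets, p):
    """Rare-event approximation: P(top) is roughly the sum of the
    minimal cut-set probabilities (valid for small probabilities)."""
    total = 0.0
    for cs in cut_sets:
        prob = 1.0
        for comp in cs:
            prob *= p[comp]
        total += prob
    return total

def fussell_vesely(component, cut_sets, p):
    """Fraction of the top-event probability carried by the cut sets
    that contain the given component."""
    containing = [cs for cs in cut_sets if component in cs]
    return top_event_probability(containing, p) / top_event_probability(cut_sets, p)

# Hypothetical minimal cut sets of a substation protection failure:
cut_sets = [{"relay"}, {"breaker", "backup_breaker"}, {"busbar"}]
p = {"relay": 1e-4, "breaker": 1e-2, "backup_breaker": 1e-2, "busbar": 1e-5}
for comp in p:
    print(comp, round(fussell_vesely(comp, cut_sets, p), 4))
```

The importance measure immediately shows where redundancy pays off: here the single-point relay and the breaker pair each carry most of the risk, while the busbar contributes little.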
11. J. Lahtinen, J. Valkonen, K. Björkman, J. Frits, I. Niemela, K. Heljanko: Model checking of safety-critical software in the nuclear engineering domain
Abstract: Instrumentation and control (I&C) systems play a vital role in the operation of safety-critical processes. Digital programmable logic controllers (PLCs) enable sophisticated control tasks, which sets high requirements for system validation and verification methods. Testing and simulation have an important role in the overall verification of a system, but are not suitable for comprehensive evaluation, because only a limited number of system behaviors can be analyzed due to time limitations. Testing is also performed too late in the development lifecycle, and thus the correction of design errors is expensive. This paper discusses the role of formal methods in software development in the area of nuclear engineering. It puts forward model checking, a computer-aided formal method for verifying the correctness of a system design model, as a promising approach to system verification. The main contribution of the paper is the development of a systematic methodology for modeling safety-critical systems in the nuclear domain. Two case studies are reviewed, in which we found errors that had previously not been detected. We also discuss the actions that should be taken in order to increase confidence in the model checking process.
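The essence of model checking a safety property is exhaustive state-space exploration with counterexample extraction. A deliberately tiny sketch follows (an invented toy pump-control model, nothing like the industrial tools and models used in the paper):

```python
from collections import deque

def check_invariant(initial, step, invariant):
    """Explicit-state reachability: exhaustively explore the model and
    return a counterexample trace if some reachable state violates the
    invariant; return None if the invariant holds everywhere."""
    seen = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace                  # shortest counterexample (BFS)
        for succ in step(state):
            if succ not in seen:
                seen.add(succ)
                queue.append((succ, trace + [succ]))
    return None

# Toy I&C logic: state = (level, pump_on); the pump must be on when level > 2.
def step(s):
    level, on = s
    yield (min(level + 1, 3), on)         # level rises
    yield (max(level - 1, 0), on)         # level falls
    if level > 2:
        yield (level, True)               # actuation logic switches pump on
    # A correct design would prevent reaching high level with the pump off.

cex = check_invariant((0, False), step, lambda s: not (s[0] > 2 and not s[1]))
print(cex)
```

Unlike testing, this explores *every* reachable state, and the returned trace is exactly the kind of counterexample a model checker reports to the designer.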
12. |
A. Nicholson, S. Webber, S. Dyer, T. Patel, H. Janicke: SCADA security in the light of Cyber-Warfare |
Abstract: Supervisory Control and Data Acquisition (SCADA) systems are deployed worldwide in many critical infrastructures ranging from power generation and public transport to industrial manufacturing systems. Whilst contemporary research has identified the need for protecting SCADA systems, this information is disparate and does not provide a coherent view of the threats and the risks resulting from the tendency to integrate these once isolated systems into corporate networks that are prone to cyber attacks. This paper surveys ongoing research and provides a coherent overview of the threats, risks and mitigation strategies in the area of SCADA security. | |
13. | Gerwin Klein et al.: seL4: Formal verification of an OS Kernel |
Abstract: Complete formal verification is the only known way to guarantee that a system is free of programming errors. We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, and hardware, and we used a unique design approach that fuses formal and operating systems techniques. To our knowledge, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. Functional correctness means here that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as the kernel will never crash, and it will never perform an unsafe operation. It also proves much more: we can predict precisely how the kernel will behave in every possible situation. seL4, a third-generation microkernel of L4 provenance, comprises 8,700 lines of C code and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels. | |
14. | Kim G. Larsen, Marius Mikučionis, Brian Nielsen, Arne Skou: Testing Real-Time Embedded Software using UPPAAL-TRON |
Abstract: UPPAAL-Tron is a new tool for model based online black-box conformance testing of real-time embedded systems specified as timed automata. In this paper we present our experiences in applying our tool and technique on an industrial case study. We conclude that the tool and technique are applicable to practical systems, and that they have promising error detection potential and execution performance. | |
15. | Moonzoo Kim: A Comparative Study of Software Model Checkers as Unit Testing Tools: An Industrial Case Study |
Abstract: Conventional testing methods often fail to detect hidden flaws in complex embedded software such as device drivers or file systems. This deficiency incurs significant development and support/maintenance cost for the manufacturers. Model checking techniques have been proposed to compensate for the weaknesses of conventional testing methods through exhaustive analyses. Whereas conventional model checkers require manual effort to create an abstract target model, modern software model checkers remove this overhead by directly analyzing a target C program, and can be utilized as unit testing tools. However, since software model checkers are not fully mature yet, they have limitations according to the underlying technologies and tool implementations, potentially critical issues when applied in industrial projects. This paper reports our experience in applying Blast and CBMC to testing the components of a storage platform software for flash memory. Through this project, we analyzed the strong and weak points of two different software model checking technologies in the viewpoint of real-world industrial application—counterexample-guided abstraction refinement with predicate abstraction and SAT-based bounded analysis. | |
16. | Mirko Jakovljevic, Astrit Ademaj: Distributed IMA: Use cases for embedded platforms |
Abstract: Distributed IMA, as described in RTCA DO-297, has been used in different forms since the 1980s in avionics and modular aircraft control architectures which deploy synchronous or time-division multiplexing for deterministic communication. Such a time-driven communication enables specific capabilities in architecture and embedded platform design, which cannot be realized by asynchronous communication. In complex integrated systems, Distributed IMA enables full virtualization of embedded resources to host distributed hard RT functions. Distributed IMA supports full separation of controlled objects (actuators), input/output devices and sensors from the computing modules in aerospace and other cross-industry applications. This simplifies reconfiguration, and design of generic open architectures. With synchronous and time-division multiplexing in Ethernet networks, critical functions can also be hosted in open systems or in systems where the behavior of non-critical functions cannot be fully assessed. | |
17. |
Kateryna Netkachova, Kevin Müller, Michael Paulitsch, Robin Bloomfield: Investigation into a layered approach to architecting security-informed safety cases |
Abstract: The paper describes a layered approach to analysing safety and security in a structured way and creating a security-informed safety case. The approach is applied to a case study - a Security Gateway controlling data flow between two different security domains implemented with a separation kernel based operating system in an avionics environment. We discuss some findings from the case study, show how the approach identifies and ameliorates important interactions between safety and security and supports the development of complex assurance case structures. |
2015
1. | Per Johannessen, Fredrik Törner and Jan Torin: Actuator Based Hazard Analysis for Safety Critical Systems |
Abstract: In the early stages of a design process, a detailed hazard analysis should be performed, particularly for safety critical systems. In this paper an actuator based hazard analysis method is presented. Since it is the actuators that affect the system's environment, this actuator based approach is the logical approach for an early hazard analysis when only limited information of the system implementation is available. This approach is also unique since all identified failures are distributed over four different severities. A criticality ranking is assigned to each failure as a combination of the severities and their distribution. This ranking is also used to give an indication of the preferred fail states. For the hazards resulting in a high criticality that need to be handled, the method supports a solvability analysis between different design solutions. This solvability analysis rewards design concepts that handle hazards with high criticality numbers. | |
2. | Karl Koscher, Alexei Czeskis, Franziska Roesner, Shwetak Patel, and Tadayoshi Kohno: Experimental Security Analysis of a Modern Automobile |
Abstract: Modern automobiles are no longer mere mechanical devices; they are pervasively monitored and controlled by dozens of digital computers coordinated via internal vehicular networks. While this transformation has driven major advancements in efficiency and safety, it has also introduced a range of new potential risks. In this paper we experimentally evaluate these issues on a modern automobile and demonstrate the fragility of the underlying system structure. We demonstrate that an attacker who is able to infiltrate virtually any Electronic Control Unit (ECU) can leverage this ability to completely circumvent a broad array of safety-critical systems. Over a range of experiments, both in the lab and in road tests, we demonstrate the ability to adversarially control a wide range of automotive functions and completely ignore driver input—including disabling the brakes, selectively braking individual wheels on demand, stopping the engine, and so on. We find that it is possible to bypass rudimentary network security protections within the car, such as maliciously bridging between our car’s two internal subnets. We also present composite attacks that leverage individual weaknesses, including an attack that embeds malicious code in a car’s telematics unit and that will completely erase any evidence of its presence after a crash. Looking forward, we discuss the complex challenges in addressing these vulnerabilities while considering the existing automotive ecosystem. |
|
3. | Jane Huffman Hayes: Building a Requirement Fault Taxonomy: Experiences from a NASA Verification and Validation Research Project |
Abstract: Fault-based analysis is an early lifecycle approach to improving software quality by preventing and/or detecting pre-specified classes of faults prior to implementation. It assists in the selection of verification and validation techniques that can be applied in order to reduce risk. This paper presents our methodology for requirements-based fault analysis and its application to National Aeronautics and Space Administration (NASA) projects. The ideas presented are general enough to be applied immediately to the development of any software system. We built a NASA-specific requirement fault taxonomy and processes for tailoring the taxonomy to a class of software projects or to a specific project. We examined requirement faults for six systems, including the International Space Station (ISS), and enhanced the taxonomy and processes. The developed processes, preliminary tailored taxonomies for Critical/Catastrophic High-Risk (CCHR) systems, preliminary fault occurrence data for the ISS project, and lessons learned are presented and discussed. | |
4. | Shan Lu, Soyeon Park, Eunsoo Seo and Yuanyuan Zhou: Learning from Mistakes — A Comprehensive Study on Real World Concurrency Bug Characteristics |
Abstract: The reality of multi-core hardware has made concurrent programs pervasive. Unfortunately, writing correct concurrent programs is difficult. Addressing this challenge requires advances in multiple directions, including concurrency bug detection, concurrent program testing, concurrent programming model design, etc. Designing effective techniques in all these directions will significantly benefit from a deep understanding of real world concurrency bug characteristics. This paper provides the first (to the best of our knowledge) comprehensive real world concurrency bug characteristic study. Specifically, we have carefully examined concurrency bug patterns, manifestation, and fix strategies of 105 randomly selected real world concurrency bugs from 4 representative server and client open-source applications (MySQL, Apache, Mozilla and OpenOffice). Our study reveals several interesting findings and provides useful guidance for concurrency bug detection, testing, and concurrent programming language design. Some of our findings are as follows: (1) Around one third of the examined non-deadlock concurrency bugs are caused by violation to programmers’ order intentions, which may not be easily expressed via synchronization primitives like locks and transactional memories; (2) Around 34% of the examined non-deadlock concurrency bugs involve multiple variables, which are not well addressed by existing bug detection tools; (3) About 92% of the examined concurrency bugs can be reliably triggered by enforcing certain orders among no more than 4 memory accesses. This indicates that testing concurrent programs can target at exploring possible orders among every small groups of memory accesses, instead of among all memory accesses; (4) About 73% of the examined non-deadlock concurrency bugs were not fixed by simply adding or changing locks, and many of the fixes were not correct at the first try, indicating the difficulty of reasoning concurrent execution by programmers. | |
5. | Alan Wassyng and Mark Lawford: Lessons Learned from a Successful Implementation of Formal Methods in an Industrial Project |
Abstract: This paper describes the lessons we learned over a thirteen year period while helping to develop the shutdown systems for the nuclear generating station at Darlington, Ontario, Canada. We begin with a brief description of the project and then show how we modified processes and notations developed in the academic community so that they are acceptable for use in industry. We highlight some of the topics that proved to be particularly challenging and that would benefit from more in-depth study without the pressure of project deadlines. | |
6. | P. Baufreton et al: Multi-domain comparison of safety standards |
Abstract: This paper presents an analysis of safety standards and their implementation in certification strategies from different domains such as aeronautics, automation, automotive, nuclear, railway and space. This work, performed in the context of the CG2E (“Club des Grandes Entreprises de l’Embarqué”), aims at identifying the main similarities and dissimilarities, for potential cross-domain harmonization. We strive to find the most comprehensive ‘trans-sectorial’ approach, within a large number of industrial domains. Exhibiting the ‘true goals’ of their numerous applicable standards, related to the safety of system and software, is a first important step towards harmonization, sharing common approaches, methods and tools whenever possible. | |
7. | Alejandro Torres-Echeverria: On the Use of LOPA and Risk Graphs for SIL determination |
Abstract: Safety Integrity Level (SIL), as defined in ANSI/ISA S84.00.01 (IEC 61511-mod), is a widely used safety performance measure for safety instrumented functions. The standard ISA S84.00.01 suggests several methods for SIL determination, ranging from fully quantitative methods to fully qualitative methods. The large number of safety functions to evaluate during plant design and the need to integrate multidisciplinary design and operation knowledge to achieve effective risk reduction, has made necessary the use of multi-disciplinary-team workshop approaches. Two widely used methods in the O&G industry for SIL determination are Layer of Protection Analysis (LOPA) and Risk Graphs. Each of these methods has its own advantages and disadvantages. LOPA allows the required risk reduction to be incorporated into the SIL values with higher precision. This enables a more detailed consideration of the available protection layers and leaves an objective traceable record of the decision-making process. In contrast, the simplicity of Risk Graphs makes them convenient for screening a large number of safety functions. This can make Risk Graphs useful as a first screening pass prior to using LOPA. However, Risk Graphs are still widely used as a stand-alone method. This paper seeks to explore the differences between LOPA and Risk Graphs and to investigate whether the Risk Graphs method can provide the same level of SIL determination rigor as LOPA. The paper aims to determine if the simplicity of Risk Graphs can make that method more efficient for cases when the number of safety functions to evaluate is considerable. | |
8. | Florian Sagstetter, et al: Security Challenges in Automotive Hardware/Software Architecture Design |
Abstract: This paper is an introduction to security challenges for the design of automotive hardware/software architectures. State-of-the-art automotive architectures are highly heterogeneous and complex systems that rely on distributed functions based on electronics and software. As cars are getting more connected with their environment, the vulnerability to attacks is rapidly growing. Examples for such wireless communication are keyless entry systems, WiFi, or Bluetooth. Despite this increasing vulnerability, the design of automotive architectures is still mainly driven by safety and cost issues rather than security. In this paper, we present potential threats and vulnerabilities, and outline upcoming security challenges in automotive architectures. In particular, we discuss the challenges arising in electric vehicles, like the vulnerability to attacks involving tampering with the battery safety. Finally, we discuss future automotive architectures based on Ethernet/IP and how formal verification methods might be used to increase their security. | |
9. | P. Cuenot et al: Applying Model Based Techniques for Early Safety Evaluation of an Automotive Architecture in Compliance with the ISO 26262 Standard |
Abstract: In 2011, the automotive industry introduced the application of a standardized process for functional safety-related development of automotive electronic products. The related international standard, ISO 26262 functional safety for road vehicles, has high demands on process documentation and analysis. Within an engineering context this challenges the tremendous increase of complexity for modern automotive systems and high productivity demands for industrial competitiveness purposes. Model based development techniques based on an Architecture Description Language (ADL) have been identified as the best candidate to manage the system complexity and the related safety analysis with the benefit of formal description and capabilities for test automation. The proposed concept relies on the definition of a compositional error modeling approach tightly coupled with the system architecture model, capable of analyzing the software and hardware architectures and implementations. This paper explains the results of the language extension based on the EAST-ADL and AUTOSAR domain model in terms of early safety evaluation of an automotive architecture, automating the qualitative and quantitative assessment of road vehicle products as claimed by the application of the ISO 26262. | |
10. | Ganesh J. Pai, Joanne Bechta Dugan: Automatic Synthesis of Dynamic Fault Trees from UML System Models |
Abstract: The reliability of a computer-based system may be as important as its performance and its correctness of computation. It is worthwhile to estimate system reliability at the conceptual design stage, since reliability can influence the subsequent design decisions and may often be pivotal for making trade-offs or in establishing system cost. In this paper we describe a framework for modeling computer-based systems, based on the Unified Modeling Language (UML), that facilitates automated dependability analysis during design. An algorithm to automatically synthesize dynamic fault trees (DFTs) from the UML system model is developed. We succeed both in embedding information needed for reliability analysis within the system model and in generating the DFT. Thereafter, we evaluate our approach using examples of real systems. We analytically compute system unreliability from the algorithmically developed DFT and we compare our results with the analytical solution of manually developed DFTs. Our solutions produce the same results as manually generated DFTs. | |
11. | Andrea Arcuri, Muhammad Zohaib Iqbal and Lionel Briand: Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing |
Abstract: Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML-MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well-enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show how, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies. | |
12. | Bernhard K. Aichernig, Klaus Hormaier, and Florian Lorber: Debugging with Timed Automata Mutations |
Abstract: Model-based Debugging is an application of Model-based Diagnosis techniques to debugging computer systems. Its basic principle is to compare a model, i.e., a description of the correct behaviour of a system, to the observed behaviour of the system. In this paper we show how this technique can be applied in the context of model-based mutation testing (MBMT) with timed automata. In MBMT we automatically generate a set of test sequences out of a test model. In contrast to general model-based testing, the test cases of MBMT cover a pre-defined set of faults that have been injected into the model (model mutation). Our automatic debugging process is purely black-box. If a test run fails, our tool reports a diagnosis as a set of model mutations. These mutations provide possible explanations why the test case has failed. For reproducing the failure, we also generate a set of minimal test cases leading to the implementation fault. The technique is implemented for Uppaal's timed automata models and is based on a language inclusion check via bounded model checking. It adds debugging capability to our existing test-case generators. A car-alarm system serves as illustrating case study. | |
13. | Elena Gómez-Martínez, et al.: A Methodology for Model-based Verification of Safety Contracts |
Abstract: The verification of safety requirements becomes crucial in critical systems where human lives depend on their correct functioning. Formal methods have often been advocated as necessary to ensure the reliability of software systems, albeit with a considerable effort. In any case, such an effort is cost-effective when verifying safety-critical systems. Often, safety requirements are expressed using safety contracts, in terms of assumptions and guarantees. To facilitate the adoption of formal methods in the safety-critical software industry, we propose a methodology based on well-known modelling languages such as UML and OCL. UML is used to model the software system while OCL is used to express the system safety contracts within UML. In the proposed methodology a UML model enriched with OCL constraints is transformed to a Petri net model that enables formal verification of such safety contracts. The methodology is evaluated on an industrial case study. The proposed approach allows an early safety verification to be performed, which increases the confidence of software engineers while designing the system. | |
14. | Ewen Denney, et al.: Perspectives on Software Safety Case Development for Unmanned Aircraft |
Abstract: We describe our experience with the ongoing development of a safety case for an unmanned aircraft system (UAS), emphasizing autopilot software safety assurance. Our approach combines formal and non-formal reasoning, yielding a semi-automatically assembled safety case, in which part of the argument for autopilot software safety is automatically generated from formal methods. This paper provides a discussion of our experiences pertaining to (a) the methodology for creating and structuring safety arguments containing heterogeneous reasoning and information, (b) the comprehensibility of, and the confidence in, the arguments created, and (c) the implications of development and safety assurance processes. The considerations for assuring aviation software safety, when using an approach such as the one in this paper, are also discussed in the context of the relevant standards and existing (process-based) certification guidelines. | |
15. | Ivan Studnia, et al: Survey on Security Threats and Protection Mechanisms in Embedded Automotive Networks |
Abstract: Embedded electronic components, so-called ECUs (Electronic Control Units), are nowadays a prominent part of a car's architecture. These ECUs, monitoring and controlling the different subsystems of a car, are interconnected through several gateways and compose the global internal network of the car. Moreover, modern cars are now able to communicate with other devices through wired or wireless interfaces such as USB, Bluetooth, WiFi or even 3G. Such interfaces may expose the internal network to the outside world and can be seen as entry points for cyber attacks. In this paper, we present a survey on security threats and protection mechanisms in embedded automotive networks. After introducing the different protocols being used in the embedded networks of current vehicles, we then analyze the potential threats targeting these networks and describe how the attackers' opportunities can be enhanced by the new communication abilities of modern cars. Finally, we present the security solutions currently being devised to address these problems. | |
16. | Gerwin Klein: seL4: Formal Verification of an Operating-System Kernel |
Abstract: We report on the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, hardware, and boot code. seL4 is a third-generation microkernel of L4 provenance, comprising 8700 lines of C and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels. We prove that the implementation always strictly follows our high-level abstract specification of kernel behavior. This encompasses traditional design and implementation safety properties such as that the kernel will never crash, and it will never perform an unsafe operation. It also implies much more: we can predict precisely how the kernel will behave in every possible situation. | |
17. | Philippa Conmy, Mark Nicholson, John McDermid: Safety Assurance Contracts for Integrated Modular Avionics |
Abstract: This paper describes a method for performing safety analysis on an Integrated Modular Avionics system in a manner which supports the incremental development and change of system components. This is achieved by analysing each component in the context of the overall system design and then finding derived safety requirements. Each IMA component (hardware, software or both) is then examined to determine how these safety requirements are met, and a contract is formed which captures the rely/guarantee conditions between that component and any component which relies on it. This contract captures the behaviour which must be preserved by a component following incremental change. | |
18. | B. Annighöfer, F. Thielecke: Supporting the design of Distributed Integrated Modular Avionics systems with binary programming |
Abstract: Distributed integrated modular avionics (DIMA) is a promising concept in aircraft avionics. Aircraft systems share resources like calculation power, memory, and sensor/actuator interfaces. Resources are provided by generalized devices, which can be installed in distributed aircraft locations. Because of the size and complexity, valid and optimal design of such systems is, however, a hard task if carried out manually. It is shown how to support this difficult task by solving subtasks of architectural design as mathematical optimization problems. Allocation problems of both, software mapping and device installation, are formulated as binary integer programs. Those are used to optimize full or sub-parts of avionics architectures for certain objectives, e.g. mass and operational interruption cost, while considering all resource and secondary system requirements. A suitable global optimal solver is proposed for solving resulting combinatorial optimization problems, which are challenging in complexity and size. The potential of the proposed approaches is demonstrated with a reference architecture composed of four redundant aircraft systems. In comparison with manual mappings this reveals optimization potentials up to 45%, while calculation times stay below one minute. |
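To give a feel for the kind of allocation problem that entry 18 formulates as a binary integer program, here is a minimal, self-contained sketch. All names and numbers (`functions`, `devices`, the memory demands, masses) are illustrative assumptions, not data from the paper, and a toy exhaustive search stands in for the global optimal solver the authors propose: each software function is assigned to exactly one device, device memory capacities must be respected, and the total mass of the devices actually used is minimized.

```python
from itertools import product

# Illustrative data (not from the paper): software functions with
# memory demands, devices with (memory capacity, mass) pairs.
functions = {"fbw": 40, "fuel": 25, "doors": 10}
devices = {"dev_a": (64, 1.2), "dev_b": (128, 2.0)}

def best_allocation(functions, devices):
    """Exhaustively search binary assignments: each function is mapped
    to exactly one device, capacity constraints are checked, and the
    objective is the total mass of devices hosting at least one function."""
    names, devs = list(functions), list(devices)
    best, best_cost = None, float("inf")
    # Each candidate maps every function to one device index.
    for assign in product(range(len(devs)), repeat=len(names)):
        load = {d: 0 for d in devs}
        for f, di in zip(names, assign):
            load[devs[di]] += functions[f]
        # Resource constraint: no device over capacity.
        if any(load[d] > devices[d][0] for d in devs):
            continue
        # Objective: mass of devices that are actually used.
        cost = sum(devices[d][1] for d in devs if load[d] > 0)
        if cost < best_cost:
            best, best_cost = {f: devs[di] for f, di in zip(names, assign)}, cost
    return best, best_cost

alloc, mass = best_allocation(functions, devices)
```

With this toy data all three functions fit on `dev_b` alone, so the minimum-mass allocation uses a single device; a real DIMA instance would be far too large for enumeration, which is why the paper turns to a binary integer programming solver.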