946 results for Formal Verification Methods
Abstract:
Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions for all on-card methods, where the main goal is to preserve the data on the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, since runtime verification is part of the Design by Contract method implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is really intrinsic to Java Card limitations and how much is just a matter of a complete re-design of JML and its tools. We propose the requirements for a new language which is Java Card compliant and indicate the lines along which a compiler for this language should be built. JCML strips from JML non-Java Card aspects such as concurrency and unsupported types. This would not be enough, however, without a substantial effort to optimize the verification code generated by its compiler, as this verification code must run on the card. The JCML compiler, although much more restricted than the JML compiler, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, JCML (Java Card Modeling Language), together with a preliminary version of its compiler.
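To make the Design by Contract style concrete, here is a minimal sketch (hypothetical, not taken from the thesis) of a Java Card-style method annotated with a lightweight JML/JCML-like contract; the Purse class, its field and the debit method are illustrative assumptions.

```java
// Hypothetical electronic-purse example; all names are illustrative.
public class Purse {
    private /*@ spec_public @*/ short balance;

    /*@ requires amount > 0 && amount <= balance;
      @ ensures balance == \old(balance) - amount;
      @*/
    public void debit(short amount) {
        // Under runtime verification, compiler-generated checks evaluate the
        // precondition on-card before any state change, so a contract
        // violation by the client cannot corrupt the card's data.
        balance -= amount;
    }
}
```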
Abstract:
Java Card technology allows the development and execution of small applications embedded in smart cards. A Java Card application is composed of an external card client and of an on-card application that implements the services available to the client by means of an Application Programming Interface (API). Usually, these applications manipulate and store important information, such as cash and confidential data of their owners. Thus, it is necessary to adopt rigor in developing a smart card application to improve its quality and trustworthiness. The use of formal methods in the development of these applications is a way to reach these quality requirements. The B method is one of many formal methods for system specification. Development in B starts with the functional specification of the system, continues with the application of some optional refinements to the specification and, from the last level of refinement, allows code to be generated for some programming language. The B formalism has good tool support, and its application to Java Card is adequate since the specification and development of APIs is one of the major applications of B. The BSmart method proposed here aims to promote the rigorous development of Java Card applications up to code generation, based on the refinement of their formal specification described in the B notation. This development is supported by the BSmart tool, which is composed of programs that automate each stage of the method, and by a library of B modules and Java Card classes that model primitive types, essential Java Card API classes and reusable data structures.
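As a rough illustration of the kind of on-card code that such generation targets, the sketch below shows a bare Java Card applet skeleton; the applet name, instruction code and service are hypothetical and not taken from the BSmart case studies.

```java
// Hypothetical wallet applet; the class name and INS code are illustrative.
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class WalletApplet extends Applet {
    private static final byte INS_GET_BALANCE = (byte) 0x30;
    private short balance;

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new WalletApplet().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS]) {
            case INS_GET_BALANCE:
                // The body of each service would be generated from the last
                // refinement level of the corresponding B operation.
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
```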
Abstract:
The use of increasingly complex software applications demands greater investment in the development of such systems to ensure applications of better quality. Therefore, new techniques are being used in Software Engineering to make the development process more effective. Among these approaches, we highlight Formal Methods, which use formal languages that are strongly based on mathematics and have well-defined semantics and syntax. One of these languages is Circus, which can be used to model concurrent systems. It was developed from the union of concepts from two other specification languages: Z, which specifies systems with complex data, and CSP, which is normally used to model concurrent systems. Circus has an associated refinement calculus, which can be used to develop software in a precise and stepwise fashion. Each step is justified by the application of a refinement law (possibly with the discharge of proof obligations). Sometimes, the same laws can be applied in the same manner in different developments or even in different parts of a single development. A strategy to optimize this calculus is to formalise these applications as a refinement tactic, which can then be used as a single transformation rule. CRefine was developed to support the Circus refinement calculus. However, before the work presented here, it did not provide support for refinement tactics. The aim of this work is to provide such tool support. For that, we develop a new module in CRefine, which automates the process of defining and applying refinement tactics that are formalised in the tactic language ArcAngelC. Finally, we validate the extension by applying the new module in a case study, which used the refinement tactics in a refinement strategy for the verification of SPARK Ada implementations of control systems. In this work, we apply our module in the first two phases of this strategy.
Abstract:
Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation of a system. Model-Based Testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an industrial B specification. Based on this case study, we obtained insights to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process and to use new coverage criteria. In addition, we have implemented a tool to automate the method and have applied it to more complex case studies.
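As a generic illustration of the equivalence class partitioning and boundary value analysis techniques mentioned above (not the thesis's actual generator), assume a hypothetical operation withdraw(amount) whose B precondition is amount : 1..limit; typical positive and negative test inputs would then be chosen as follows.

```java
import java.util.List;

// Hypothetical operation under test: withdraw(amount), precondition 1 <= amount <= limit.
public class BoundaryValueSketch {

    // Positive tests: values inside the valid partition, including both boundaries.
    static List<Integer> positiveTests(int limit) {
        return List.of(1, 2, limit - 1, limit);
    }

    // Negative tests: values just outside the partition, violating the precondition.
    static List<Integer> negativeTests(int limit) {
        return List.of(0, -1, limit + 1);
    }

    public static void main(String[] args) {
        System.out.println("positive: " + positiveTests(100)); // [1, 2, 99, 100]
        System.out.println("negative: " + negativeTests(100)); // [0, -1, 101]
    }
}
```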
Abstract:
Removing inconsistencies in a project is a less expensive activity when done in the early stages of design. The use of formal methods improves the understanding of systems. They offer various techniques, such as formal specification and verification, to identify these problems in the initial stages of a project. However, the transformation from a formal specification into a programming language is a non-trivial and error-prone task, especially when done manually. Tool support at this stage can bring great benefits to the final product. This paper proposes the extension of a tool whose focus is the automatic translation of specifications written in CSPM into Handel-C. CSP is a formal description language suitable for concurrent systems, and CSPM is the machine-readable notation used by its tool support. Handel-C is a programming language whose programs can be compiled directly to FPGAs. Our extension increases the number of CSPM operators accepted by the tool, allowing the user to define local processes, to rename channels in a process and to use Boolean guards on external choices. In addition, we propose the implementation of a communication protocol that eliminates some restrictions on the parallel composition of processes in the translation into Handel-C, allowing communication on the same channel among multiple processes to be mapped in a consistent manner and ensuring that improper communication on a channel, i.e., communication that is not allowed in the system specification, does not occur in the generated code.
Abstract:
Component-based development has revolutionized the software development process, facilitating maintenance and providing more reliability and reuse. Nevertheless, even with all the advantages of developing components, their composition remains an important concern. Verification through informal testing is not enough to achieve safe composition, because such tests are not based on formal semantic models with which we can precisely describe a system's behaviour. In this context, formal methods provide ways to accurately specify systems through mathematical notations, providing, among other benefits, more safety. The formal method CSP enables the specification of concurrent systems and the verification of properties intrinsic to them, as well as refinement among different models. Some approaches apply constraints expressed in CSP to check the behaviour of the composition between components, helping to verify those components in advance. Hence, aiming to assist this process, and considering that the software market increasingly demands automation to reduce work and provide agility in business, this work presents a tool that automates the verification of composition among components, in which all the complexity of the formal language is kept hidden from users. Thus, through a simple interface, the tool BST (BRIC-Tool-Suport) helps to create and compose components, predicting, in advance, undesirable behaviours in the system, such as deadlocks.
Abstract:
MODSI is a multi-model tool for information systems modeling. A modeling process in MODSI can be driven according to three different approaches: informal, semi-formal and formal. The MODSI tool is therefore based on the linked usage of these three modeling approaches. It can be employed at two different levels: the meta-modeling of a method and the modeling of an information system. In this paper we start by presenting the different types of modeling and analysing their particular features. Then, we introduce the meta-model defined in our tool, as well as the tool's functional architecture. Finally, we describe and illustrate the various usage levels of this tool.
Abstract:
To assist cattle producers in the transition from microsatellite (MS) to single nucleotide polymorphism (SNP) genotyping for parental verification, we previously devised an effective and inexpensive method to impute MS alleles from SNP haplotypes. While the reported method was verified with only a limited data set (N = 479) from Brown Swiss, Guernsey, Holstein, and Jersey cattle, some of the MS-SNP haplotype associations were concordant across these phylogenetically diverse breeds. This implied that some haplotypes predate modern breed formation and remain in strong linkage disequilibrium. To expand the utility of MS allele imputation across breeds, MS and SNP data from more than 8000 animals representing 39 breeds (Bos taurus and B. indicus) were used to predict 9410 SNP haplotypes, incorporating an average of 73 SNPs per haplotype, for which alleles from 12 MS markers could be accurately imputed. Approximately 25% of the MS-SNP haplotypes were present in multiple breeds (N = 2 to 36 breeds). These shared haplotypes allowed for MS imputation in breeds that were not represented in the reference population, with only a small increase in Mendelian inheritance inconsistencies. Our reported reference haplotypes can be used for any cattle breed, and the reported methods can be applied to any species to aid the transition from MS to SNP genetic markers. While ~91% of the animals with imputed alleles for 12 MS markers had ≤1 Mendelian inheritance conflicts with their parents' reported MS genotypes, this figure was 96% for our reference animals, indicating potential errors in the reported MS genotypes. The workflow we suggest autocorrects for genotyping errors and rare haplotypes by MS genotyping animals whose imputed MS alleles fail parentage verification, and then incorporating those animals into the reference dataset.
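The core lookup step of this workflow can be pictured with the toy sketch below (illustrative only, not the published pipeline): a reference table maps each phased SNP haplotype to the MS allele observed with it, and a haplotype absent from the table would trigger conventional MS genotyping, after which the animal is folded back into the reference set. All names and values are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy illustration of imputing one microsatellite (MS) marker from SNP haplotypes.
public class MsImputationSketch {
    // Hypothetical reference: SNP haplotype string -> MS allele observed with it.
    private final Map<String, Integer> reference = new HashMap<>();

    void addReference(String snpHaplotype, int msAllele) {
        reference.put(snpHaplotype, msAllele);
    }

    Optional<Integer> impute(String snpHaplotype) {
        // An empty result means the haplotype is not in the reference, so the
        // animal would be MS genotyped conventionally and added to the reference.
        return Optional.ofNullable(reference.get(snpHaplotype));
    }

    public static void main(String[] args) {
        MsImputationSketch sketch = new MsImputationSketch();
        sketch.addReference("0110", 212); // hypothetical haplotype and allele size
        System.out.println(sketch.impute("0110")); // Optional[212]
        System.out.println(sketch.impute("1001")); // Optional.empty
    }
}
```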
Abstract:
Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger, lighter and longer-lasting, which can lead to the creation of safer protection gear, more fuel-efficient transportation methods and more affordable materials, among other examples. This thesis proposes a numerical and analytical verification of an in-house developed multiscale model for predicting the mechanical behavior of composite materials with various configurations subjected to impact loading. This verification is done by comparing analytical and numerical reference solutions with the results found when using the model. The model takes into account the heterogeneity of the materials, which can only be noticed at smaller length scales, based on the fundamental structural properties of each of the composite’s constituents. This model can potentially reduce or eliminate the need for costly and time-consuming experiments that are necessary for material characterization, since it relies strictly upon the fundamental structural properties of each of the composite’s constituents. The results from simulations using the multiscale model were compared against results from direct simulations using overkill meshes, which considered all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modeling composites under impact loads. Advisor: David H. Allen
Abstract:
The decreasing number of women graduating in the Science, Technology, Engineering and Mathematics (STEM) fields continues to be a major concern. Despite national support in the form of grants provided by the National Science Foundation and the National Center for Information and Technology, and legislation such as the Deficit Reduction Act of 2005 that encourages women to enter the STEM fields, the number of women actually graduating in these fields is surprisingly low. This research study focuses on a robotics competition and its ability to engage female adolescents in STEM curricula. Data have been collected to help explain why young women are reluctant to take technology or engineering courses in high school and college. Factors that have been described include attitudes, parental support, social aspects, peer pressure, and lack of role models. Often these courses were thought to have masculine and “nerdy” overtones. The courses usually had majority-male enrollments and appeared to be very competitive. With more female adolescents engaging in this type of competitive atmosphere, this study gathered information to discover what about the competition appealed to these young women. Focus groups were used to gather information from adolescent females who were participating in the First Lego League (FLL) and CEENBoT competitions. What enticed them to participate in a curriculum that data demonstrated many of their peers avoided? FLL and CEENBoT are robotics programs based on curricula that are taught in afterschool programs in non-formal environments. These programs culminate in a very large robotics competition. My research questions included: What factors encouraged participants to take part in the robotics competition? What was the original enticement to the FLL and CEENBoT programs? What will make participants want to come back, and what are the participants’ plans for the future? My research mirrored previous findings: lack of role models, the need for parental support, social stigmas and peer pressure are still major factors that determine whether adolescent females seek out STEM activities. An interesting finding, which was an exception to previous findings, was that these female adolescents enjoyed the challenge of the competition. The informal learning environments encouraged an atmosphere of social engagement and cooperative learning. Many of the volunteers who led the afterschool programs were women (role models), and a majority of parents showed support by accommodating the afterschool schedule. The young women engaged in the competition noted it was a friendly competition, but they were all there to win. All who participated in the competition had a similar learning environment: competitive but cooperative. Further research is needed to determine whether it is the learning environment that lures adolescent females to the program and entices them to continue in the STEM fields, or the competitive aspect of the culminating activity. Advisors: James King and Allen Steckelberg
Abstract:
Purpose: This paper presents the application of MAGIC-f gel to three-dimensional dose distribution measurement and its ability to accurately measure the dose distribution from a tomotherapy unit. Methods: A prostate intensity-modulated radiation therapy (IMRT) irradiation was simulated in the gel phantom and the treatment was delivered by a TomoTherapy unit. The dose distribution was evaluated from the R2 distribution measured by magnetic resonance imaging. Results: A high similarity was found by overlapping the isodoses of the dose distribution measured with the gel and that expected by the treatment planning system (TPS). Another analysis compared the relative absorbed dose profiles in the measured and expected dose distributions, extracted along indicated lines of the volume, and the results were also in agreement. The gamma index analysis was also applied to the data and a high pass rate was achieved (88.4% for the analysis using 3%/3 mm and 96.5% using 4%/4 mm). The full three-dimensional analysis compared the dose-volume histograms measured for the planning volumes with those expected by the treatment planning system, and the results were also in good agreement, as shown by the overlapping curves. Conclusions: These results show that MAGIC-f gel is promising for three-dimensional dose distribution measurements. (C) 2012 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4704496]
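For reference, the gamma index analysis cited above combines a dose-difference criterion (3% or 4% in the pass rates quoted) with a distance-to-agreement criterion (3 mm or 4 mm); a measured point passes when its gamma value does not exceed 1, and the pass rate is the fraction of evaluated points that pass. A standard formulation of the index, not quoted from the paper, is:

```latex
\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c}
  \sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^{2}}{\Delta d_M^{2}}
      + \frac{\bigl(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\bigr)^{2}}{\Delta D_M^{2}}}\,,
  \qquad \text{pass if } \gamma(\mathbf{r}_m) \le 1,
```

where D_m and D_c are the measured and calculated dose distributions, \Delta D_M is the dose-difference criterion and \Delta d_M is the distance-to-agreement criterion.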
Abstract:
Background: Although the release of cardiac biomarkers after percutaneous (PCI) or surgical revascularization (CABG) is common, its prognostic significance is not known. Questions remain about the mechanisms and degree of correlation between the release, the volume of myocardial tissue loss, and the long-term significance. Delayed-enhancement cardiac magnetic resonance (CMR) consistently quantifies areas of irreversible myocardial injury. To investigate the quantitative relationship between irreversible injury and cardiac biomarkers, we will evaluate the extent of irreversible injury in patients undergoing PCI and CABG and relate it to postprocedural changes in cardiac biomarkers and long-term prognosis. Methods/Design: The study will include 150 patients with multivessel coronary artery disease (CAD) with left ventricle ejection fraction (LVEF) and a formal indication for CABG; 50 patients will undergo CABG with cardiopulmonary bypass (CPB); 50 patients with the same arterial and ventricular condition indicated for myocardial revascularization will undergo CABG without CPB; and another 50 patients with CAD and preserved ventricular function will undergo PCI using stents. All patients will undergo CMR before and after surgery or PCI. We will also evaluate the release of cardiac markers of necrosis immediately before and after each procedure. The primary outcome considered is overall death at 5-year follow-up. Secondary outcomes are the levels of CK-MB isoenzyme and troponin I in association with the presence of myocardial fibrosis and systolic left ventricular dysfunction assessed by CMR. Discussion: The MASS-V Trial aims to establish reliable values for enzyme markers of myocardial necrosis in the absence of manifest myocardial infarction after mechanical interventions. The establishment of these indices has diagnostic and prognostic value and therefore calls for relevant and distinct therapeutic measures. In daily practice, the inappropriate use of these necrosis markers has led to misdiagnosis and, therefore, to wrong treatment. The availability of a more sensitive tool such as CMR provides unprecedented diagnostic accuracy for myocardial damage when correlated with necrosis enzyme markers. We aim to correlate laboratory data with imaging, thereby establishing more refined data on the presence or absence of irreversible myocardial injury after the procedure, whether percutaneous or surgical, and with or without the use of cardiopulmonary bypass.
Abstract:
Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or oriented towards achieving some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to control such complexity and to ensure that the system as a whole evolves with certain properties. However, the extensive use of protocols has raised some issues, from the language for specifying them to the various verification aspects. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason about such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that allows one to reason about the specifications and to test the conformance of given interactions with respect to a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF Framework) and (2) the a priori conformance verification of peers with respect to a given protocol (AlLoWS Framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used both to program the interacting peers and to ease their implementation task.