932 results for Verification
Abstract:
High intensity focused ultrasound (HIFU) can be used to control bleeding, both from individual blood vessels and from gross damage to the capillary bed. This process, called acoustic hemostasis, is being studied in the hope that such a method would ultimately provide a lifesaving treatment during the so-called "golden hour", a brief grace period after a severe trauma in which prompt therapy can save the life of an injured person. Thermal effects play a major role in the occlusion of small vessels and also appear to contribute to the sealing of punctures in major blood vessels. However, aggressive ultrasound-induced tissue heating can also affect healthy tissue and can lead to deleterious mechanical bioeffects. Moreover, the presence of vascularity can limit one's ability to elevate the temperature of blood vessel walls owing to convective heat transport. In an effort to better understand the heating process in tissues with vascular structure, we have developed a numerical simulation that couples models for ultrasound propagation, acoustic streaming, ultrasound heating, and blood cooling in Newtonian viscous media. The 3-D simulation allows for the study of complicated biological structures and insonation geometries. We have also undertaken a series of in vitro experiments, in non-uniform flow-through tissue phantoms, designed to provide a ground-truth verification of the model predictions. The calculated and measured results were compared over a range of values for insonation pressure, insonation time, and flow rate; we show good agreement between predictions and measurements. We then conducted a series of simulations that address two limiting problems of interest: hemostasis in small and large vessels. We employed realistic human tissue properties and considered more complex geometries. Results show that the heating pattern in and around a blood vessel differs with vessel size, flow rate, and beam orientation relative to the flow axis. Complete occlusion and wall-puncture sealing are both possible depending on the exposure conditions. These results concur with prior clinical observations and may prove useful for planning more effective HIFU treatments.
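The blood-cooling term is what distinguishes perfused tissue from inert media in this kind of model. As a minimal sketch of the idea (not the paper's coupled 3-D solver), the snippet below advances a 1-D Pennes-type bioheat equation, in which a perfusion term carries heat back toward arterial temperature; all parameter values are generic assumptions, not the paper's.

```python
import numpy as np

# 1-D explicit finite-difference step for a Pennes-type bioheat model:
#   rho*c*dT/dt = k*d2T/dx2 - w_b*rho_b*c_b*(T - T_a) + Q
# The perfusion term pulls tissue back toward arterial temperature T_a,
# which is the "blood cooling" effect discussed above. Values are generic.
rho, c, k = 1050.0, 3600.0, 0.5          # tissue density, heat capacity, conductivity (SI)
rho_b, c_b, w_b = 1060.0, 3800.0, 0.01   # blood density, heat capacity, perfusion rate (1/s)
T_a = 37.0                               # arterial blood temperature (deg C)

dx, dt, n = 1e-3, 0.01, 101              # 1 mm grid, 10 ms step, 10 cm domain
T = np.full(n, 37.0)                     # initial tissue temperature
Q = np.zeros(n)
Q[45:56] = 5e6                           # assumed focal acoustic heating (W/m^3)

for _ in range(1000):                    # simulate 10 s of insonation
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    dT = (k * lap - w_b * rho_b * c_b * (T - T_a) + Q) / (rho * c)
    T[1:-1] += dt * dT[1:-1]             # fixed-temperature boundaries
print(f"peak temperature after 10 s: {T.max():.1f} C")
```

Raising the perfusion rate w_b visibly caps the peak temperature, which is exactly the convective limit on vessel-wall heating the abstract describes.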
Abstract:
Numerous problems can be modeled as traffic through a network in which constraints regulate flow. Vehicular road travel, computer networks, and cloud-based resource distribution, among others, all have natural representations in this manner. As these networks grow in size and/or complexity, analysis and certification of their safety invariants become increasingly costly. The NetSketch formalism introduces a lightweight verification framework that allows for greater scalability than traditional analysis methods. The NetSketch tool was developed to provide the power of this formalism in an easy-to-use, intuitive user interface.
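To ground the notion of certifying a flow invariant against capacity constraints, here is a small self-contained sketch of the kind of whole-system analysis NetSketch aims to outgrow (this is classic max-flow, not the NetSketch method): an Edmonds-Karp check that a toy road network can carry a required demand. The network and numbers are invented.

```python
from collections import deque

# Certify a flow invariant by whole-system analysis: compute max-flow with
# Edmonds-Karp and compare it against the required demand.
def max_flow(cap, s, t):
    """cap: dict {u: {v: capacity}}; returns the maximum s->t flow."""
    flow = 0
    residual = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():          # add zero-capacity reverse edges
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}               # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                  # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= aug
            residual[v][u] += aug
        flow += aug

# Toy road network with capacities in vehicles/min (all values assumed).
cap = {"A": {"B": 30, "C": 20}, "B": {"D": 25}, "C": {"D": 15}, "D": {}}
print(max_flow(cap, "A", "D") >= 35)     # safety invariant: demand of 35 met?
```

Such an analysis must be redone from scratch whenever the network changes, which is the scalability cost the compositional NetSketch approach is designed to avoid.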
Abstract:
Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would have gone, otherwise, unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
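To give a flavor of what the TRA restrictions rule out, the following loose Python analogue (an assumption for illustration; the actual CLEOPATRA syntax is C-like and the TRA semantics are richer) reacts to timestamped input events only causally and only after a bounded, strictly positive delay, so zero-time or clairvoyant behavior simply cannot be expressed.

```python
import heapq

# Loose analogue of a time-constrained reactive object. Every output is
# caused by a prior input event and is delivered only after a bounded,
# strictly positive delay; the object cannot act on events it has not seen.
class Thermostat:
    def __init__(self, delay=0.005):
        self.delay = delay                            # reaction latency (s), > 0

    def react(self, channel, value, now):
        if channel == "temp" and value > 70.0:
            return (now + self.delay, "alarm", True)  # causal, delayed output
        return None

pending = [(0.0, "temp", 68.0), (1.0, "temp", 74.0)]  # timestamped input events
heapq.heapify(pending)
tra = Thermostat()
while pending:
    now, channel, value = heapq.heappop(pending)
    out = tra.react(channel, value, now)
    if out:
        print(out)                                    # (1.005, 'alarm', True)
```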
Abstract:
As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks), there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models, such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
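The essence of the SPIN-based approach is exhaustive exploration of the peers' joint state space. The toy model below is a drastic simplification of the Expect/100-Continue handshake (it omits, for instance, the client timeout the real protocol allows), hand-rolled in Python rather than Promela; it nonetheless mechanically finds the kind of stuck state the paper describes, where a client waits for a 100 response that an older server will never send.

```python
from collections import deque

# Enumerate the reachable joint states of two communicating peers and flag
# "stuck" states, i.e., non-final states with no outgoing transitions.
def successors(state, server_sends_100):
    client, server = state
    if (client, server) == ("start", "idle"):
        yield ("waiting_100", "got_headers")       # headers with Expect delivered
    if server == "got_headers":
        if server_sends_100:
            yield (client, "sent_100")             # compliant server replies 100
        else:
            yield (client, "waiting_body")         # old server waits for the body
    if (client, server) == ("waiting_100", "sent_100"):
        yield ("sending_body", "waiting_body")     # client got 100, sends body
    if (client, server) == ("sending_body", "waiting_body"):
        yield ("done", "done")

def find_stuck_states(server_sends_100):
    init, seen, stuck = ("start", "idle"), {("start", "idle")}, []
    q = deque([init])
    while q:
        s = q.popleft()
        succ = list(successors(s, server_sends_100))
        if not succ and s != ("done", "done"):
            stuck.append(s)                        # no move possible: deadlock
        for t in succ:
            if t not in seen:
                seen.add(t)
                q.append(t)
    return stuck

print(find_stuck_states(server_sends_100=True))    # [] : handshake completes
print(find_stuck_states(server_sends_100=False))   # [('waiting_100', 'waiting_body')]
```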
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions: claims that particular sequences of components will be causally indistinguishable from other, shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
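The sketch below illustrates the reduction idea on an assumed toy component, a filter that drops consecutive duplicate messages: if two copies in series are observationally identical to one copy, then chains of any length collapse to a single copy and only the short compositions need model checking. Note that the methodology requires the reduction be proven; this sketch merely tests it exhaustively on bounded inputs.

```python
from itertools import product

# A toy component: a filter that deduplicates consecutive messages.
def dedup(msgs):
    out, last = [], object()       # sentinel so the first message always passes
    for m in msgs:
        if m != last:
            out.append(m)
            last = m
    return out

def compose(f, g):
    return lambda msgs: g(f(msgs))

def indistinguishable(f, g, alphabet="ab", max_len=6):
    """Exhaustively compare f and g on every input up to max_len symbols."""
    return all(f(list(w)) == g(list(w))
               for n in range(max_len + 1)
               for w in product(alphabet, repeat=n))

# Reduction claim: dedup ; dedup is indistinguishable from dedup alone,
# so a chain of n dedup stages reduces to one and need not be re-verified.
print(indistinguishable(compose(dedup, dedup), dedup))   # True
```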
Abstract:
NetSketch is a tool that enables the specification of network-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system so as to retain sufficient detail to enable future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis approach based on a strongly-typed Domain-Specific Language (DSL) for specifying network configurations at various levels of sketchiness, along with invariants that need to be enforced thereupon. In this paper, we overview NetSketch, highlight its salient features, and illustrate how it could be used in applications, including the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications). In a companion paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user interface when used in "sketch mode", and prove its soundness relative to appropriately-defined notions of validity.
Abstract:
NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system while retaining sufficient information about it to carry out future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis. The compositional analysis is based on a strongly-typed Domain-Specific Language (DSL) for describing and reasoning about constrained-flow networks at various levels of sketchiness along with invariants that need to be enforced thereupon. In this paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user-interface when used in "sketch mode", and prove its soundness relative to appropriately-defined notions of validity. In a companion paper [6], we overview NetSketch, highlight its salient features, and illustrate how it could be used in two applications: the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications).
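A minimal illustration of interval-typed composition in the spirit of the DSL (the names and the single typing rule below are simplifications assumed for this sketch; the paper defines the real typing judgments and proves their soundness): a component's type bounds the flow it accepts and may emit, and serial composition type-checks only when the producer's output range fits the consumer's input range.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    accepts: tuple   # (lo, hi) flow the component can take
    emits: tuple     # (lo, hi) flow it may produce

def fits(inner, outer):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def compose(a, b):
    """Serial composition a ; b; reject hook-ups that can be unsafe."""
    if not fits(a.emits, b.accepts):
        raise TypeError(f"{a.name} may emit flow outside what {b.name} accepts")
    # The composite accepts what a accepts and emits at most what b can emit.
    return Component(f"{a.name};{b.name}", a.accepts, b.emits)

shaper = Component("shaper", accepts=(0, 100), emits=(0, 40))
link   = Component("link",   accepts=(0, 50),  emits=(0, 50))
source = Component("source", accepts=(0, 0),   emits=(0, 80))

print(compose(shaper, link))        # OK: 0..40 fits within 0..50
try:
    compose(source, link)           # 0..80 exceeds the link's 0..50
except TypeError as e:
    print("rejected:", e)
```

Widening the intervals corresponds to a "sketchier" description: the check stays cheap and compositional, at the cost of exactness, which is the tradeoff the abstract highlights.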
Abstract:
In work that involves mathematical rigor, there are numerous benefits to adopting a representation of models and arguments that can be supplied to a formal reasoning or verification system: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of formal verification tools that can provide these benefits. In earlier work [Lap09a], we attempted to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. This work expands one aspect of that earlier work by considering more extensively an essential capability for any formal reasoning system whose design is oriented around simulating the natural context: native support for a collection of mathematical relations that deal with common constructs in arithmetic and set theory. We provide a formal definition for a context of relations that can be used both to validate and to assist formal reasoning activities. We prove that any algorithm that implements this formal structure faithfully will necessarily converge. Finally, we consider the efficiency of an implementation of this formal structure that leverages modular implementations of well-known data structures: balanced search trees and transitive closures of hypergraphs.
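As a rough sketch of what a context of relations might validate (the API below is invented; the paper's formal structure is more general), the snippet closes a set of ordering facts under transitivity to a fixpoint. Convergence here is immediate for the same basic reason fixpoint arguments of this kind go through: the space of facts over finitely many symbols is finite, and each pass only adds facts.

```python
# Close a set of ordering facts under transitivity, then use the closure to
# check whether a claimed inference step is already justified by the context.
def close(facts):
    """facts: set of (x, rel, y) with rel in {'<', '<='}."""
    facts = set(facts)
    while True:
        derived = {
            # combining with a strict relation yields a strict relation
            (a, '<' if '<' in (r1, r2) else '<=', c)
            for (a, r1, b) in facts
            for (b2, r2, c) in facts
            if b == b2
        }
        if derived <= facts:          # fixpoint reached: nothing new derivable
            return facts
        facts |= derived

ctx = close({('x', '<', 'y'), ('y', '<=', 'z'), ('z', '<=', 'w')})
print(('x', '<', 'w') in ctx)   # True: a validated inference step
print(('w', '<', 'x') in ctx)   # False: not derivable in this context
```

A production implementation would replace the quadratic pass with the indexed structures the abstract mentions (balanced search trees, hypergraph transitive closures) so that queries and incremental assertions stay cheap.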
Abstract:
A computer model has been developed to optimize the performance of a 50 kWp photovoltaic system which supplies electrical energy to a dairy farm at Fota Island in Cork Harbour. Optimization of the system involves maximising the efficiency and increasing the performance and reliability of each hardware unit. The model accepts horizontal insolation, ambient temperature, wind speed, wind direction, and load demand as inputs. An optimization program uses the computer model to simulate the optimum operating conditions. From this analysis, criteria are established which are used to improve the photovoltaic system's operation. This thesis describes the model concepts, the model implementation, and the model verification procedures used during development. It also describes the techniques used during system optimization. The software, written in FORTRAN, is structured in modular units to provide logical and efficient programming. These modular units may also be used in the modelling and optimization of other photovoltaic systems.
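As a back-of-envelope illustration of one modular unit such a model contains (the thesis's FORTRAN modules are not reproduced here; the coefficients below are generic textbook assumptions rather than Fota Island values, and this simple NOCT cell-temperature estimate ignores the wind-speed input the thesis uses):

```python
# Estimate array output from irradiance and ambient temperature.
def cell_temperature(g, t_ambient, noct=45.0):
    """Cell temperature (C) from irradiance g (W/m^2) via the NOCT model."""
    return t_ambient + (noct - 20.0) * g / 800.0

def array_power(g, t_ambient, p_stc=50e3, gamma=-0.004):
    """DC output (W) of a 50 kWp array: linear in irradiance,
    derated by the temperature coefficient gamma per degree above 25 C."""
    t_cell = cell_temperature(g, t_ambient)
    return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

print(f"{array_power(g=650.0, t_ambient=12.0):.0f} W")  # a mild afternoon
```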
Abstract:
Spoken language and learned song are complex communication behaviors found in only a few species, including humans and three groups of distantly related birds--songbirds, parrots, and hummingbirds. Despite their large phylogenetic distances, these vocal learners show convergent behaviors and associated brain pathways for vocal communication. However, it is not clear whether this behavioral and anatomical convergence is associated with molecular convergence. Here we used oligo microarrays to screen for genes differentially regulated in brain nuclei necessary for producing learned vocalizations relative to adjacent brain areas that control other behaviors in avian vocal learners versus vocal non-learners. A top candidate gene in our screen was a calcium-binding protein, parvalbumin (PV). In situ hybridization verification revealed that PV was expressed significantly higher throughout the song motor pathway, including brainstem vocal motor neurons, relative to the surrounding brain regions in all distantly related avian vocal learners. This differential expression was specific to PV and vocal learners, as it was not found in avian vocal non-learners, nor for control genes in learners and non-learners. Similar to the vocal learning birds, higher PV up-regulation was found in the brainstem tongue motor neurons used for speech production in humans relative to a non-human primate, the macaque. These results suggest repeated convergent evolution of differential PV up-regulation in the brains of vocal learners separated by more than 65-300 million years from a common ancestor, and that the specialized behaviors of learned song and speech may require extra calcium buffering and signaling.
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex, and to a large extent it now determines the safety of the people who depend on them. For example, implantable cardiac pacemakers have over 80,000 lines of code responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation, and testing process for complex software-controlled embedded systems. © 2014 ACM.
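A toy illustration of the kind of syntactic rule a model translator applies (the actual UPP2SF rules and Stateflow's label format are more involved; the rendering below is an assumed simplification): a timed-automaton edge with a clock guard and a reset becomes a Stateflow-style transition label.

```python
from dataclasses import dataclass

# Render a timed-automaton edge as a Stateflow-style "[guard]{action}" label.
@dataclass
class Edge:
    src: str
    dst: str
    guard: str      # e.g. a clock constraint such as "t >= LRI"
    update: str     # e.g. a clock reset such as "t = 0"

def to_stateflow_label(e: Edge) -> str:
    parts = []
    if e.guard:
        parts.append(f"[{e.guard}]")
    if e.update:
        parts.append(f"{{{e.update}}}")
    return f"{e.src} -> {e.dst}: " + "".join(parts)

# A pacemaker-flavored edge: pace the ventricle once the LRI timer expires.
edge = Edge("WaitLRI", "PaceV", guard="t >= LRI", update="t = 0")
print(to_stateflow_label(edge))   # WaitLRI -> PaceV: [t >= LRI]{t = 0}
```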
Abstract:
This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel: to reuse approaches to distributed heterogeneous data architectures and the lessons learned from that work, and to apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents, and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
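A minimal sketch of the capture/verify/sign-off flow described above (all class, method, and field names are invented for illustration; the paper specifies the architecture at the level of services, not code):

```python
# Capture locally, verify semi-autonomously, then sign off into a central
# outcome database, mirroring the DECADE pipeline sketched in the abstract.
class LocalStore:
    def __init__(self):
        self.records = []

    def capture(self, record):
        self.records.append(record)         # autonomous recording to local storage

class VerificationEngine:
    def __init__(self, outcome_db):
        self.outcome_db = outcome_db        # centrally verified outcome database

    def sign_off(self, record, assessor):
        if record.get("evidence"):          # semi-autonomous check precedes sign-off
            self.outcome_db.append({**record, "verified_by": assessor})
            return True
        return False

outcomes = []
store = LocalStore()
store.capture({"skill": "welding", "evidence": ["video-017"]})
engine = VerificationEngine(outcomes)
print(engine.sign_off(store.records[0], assessor="mentor-42"))   # True
```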
Abstract:
The contract work has demonstrated that older data can be assessed and entered into the MR format. Older data has associated problems but is retrievable. The contract successfully imported all datasets as required. MNCR survey sheets fit well into the MR format. The data validation and verification process can be improved: a number of computerised shortcuts can be suggested, and the process can be made more intuitive. Such a move is vital if MR is to be adopted as a standard by the recording community, both at a voluntary level and potentially by consultancies.
Abstract:
Coccolithophores are the largest source of calcium carbonate in the oceans and are considered to play an important role in oceanic carbon cycles. Current methods to detect the presence of coccolithophore blooms from Earth observation data often produce high numbers of false positives in shelf seas and coastal zones due to the spectral similarity between coccolithophores and other suspended particulates. Current methods are therefore unable to characterise bloom events in shelf seas and coastal zones, despite the importance of these phytoplankton in the global carbon cycle. A novel approach to detect the presence of coccolithophore blooms from Earth observation data is presented. The method builds upon previous optical work and uses a statistical framework to combine spectral, spatial, and temporal information to produce maps of coccolithophore bloom extent. Validation and verification results for an area of the northeast Atlantic are presented, using an in situ database (N = 432) and all available SeaWiFS data for 2003 and 2004. Verification results show that the approach produces a temporal seasonal signal consistent with biological studies of these phytoplankton. Validation using the in situ coccolithophore cell count database shows a high correct recognition rate of 80% and a low false-positive rate of 0.14 (compared with 63% and 0.34, respectively, for the established, purely spectral approach). To guide its broader use, a full sensitivity analysis for the algorithm parameters is presented.
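The fusion step can be pictured with a simple Bayesian combination (a schematic only; the paper's statistical framework and its parameters differ, and the numbers below are invented): each information layer contributes a likelihood ratio, and the layers combine under a conditional-independence assumption.

```python
# Combine per-layer likelihood ratios into a posterior bloom probability:
# posterior odds = prior odds x product of the layers' likelihood ratios.
def posterior_bloom(likelihood_ratios, prior=0.05):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:          # one ratio per evidence layer
        odds *= lr
    return odds / (1 + odds)

# A pixel that looks coccolith-like spectrally (LR 12), sits next to other
# flagged pixels (LR 3), and falls within the bloom season (LR 2):
print(f"{posterior_bloom([12, 3, 2]):.2f}")   # ~0.79
```

The appeal of this framing is that a spectrally ambiguous pixel (a modest first ratio) is only flagged when the spatial and temporal layers also support it, which is how the combined method suppresses the false positives that purely spectral detection produces in coastal waters.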