961 results for Compositional Verification
Abstract:
This paper presents a framework for compositional verification of Object-Z specifications. Its key feature is a proof rule based on the decomposition of hierarchical Object-Z models. For each component in the hierarchy, local properties are proven in a single proof step. However, components are not considered in isolation: they are viewed in the context of the referencing super-component, and the proof steps involve assumptions on the properties of the sub-components. The framework is defined for Linear Temporal Logic (LTL).
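For orientation, and not taken from the paper itself: compositional proof rules of this kind are typically variations on the classical assume-guarantee rule for safety properties, in which a component M_2 is verified under an assumption A that is separately discharged against its environment M_1:

\frac{\langle \mathit{true} \rangle\, M_1 \,\langle A \rangle \qquad \langle A \rangle\, M_2 \,\langle \varphi \rangle}{\langle \mathit{true} \rangle\; M_1 \parallel M_2 \;\langle \varphi \rangle}

The global property \varphi of the composition then follows from two smaller, local obligations; hierarchical rules extend the same idea to a tree of components.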
Abstract:
Software product line modeling aims at capturing a set of software products in an economic yet meaningful way. We introduce a class of variability models that capture the sharing between the software artifacts forming the products of a software product line (SPL) in a hierarchical fashion, in terms of commonalities and orthogonalities. Such models are useful when analyzing and verifying all products of an SPL, since they provide a scheme for divide-and-conquer-style decomposition of the analysis or verification problem at hand. We define an abstract class of SPLs for which variability models can be constructed that are optimal w.r.t. the chosen representation of sharing. We show how the constructed models can be fed into a previously developed algorithmic technique for compositional verification of control-flow temporal safety properties, so that the properties to be verified are iteratively decomposed into simpler ones over orthogonal parts of the SPL, and are not re-verified over the shared parts. We provide tool support for our technique, and evaluate our tool on a small but realistic SPL of cash desks.
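As a hedged illustration of the divide-and-conquer scheme described above (a toy sketch under assumed names, not the paper's formalism or tool): shared artifacts sit high in the hierarchy and are verified once, while orthogonal variation points are verified independently below them.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VariabilityNode:
    name: str
    shared: List[str]                       # artifacts common to every product below this node
    children: List["VariabilityNode"] = field(default_factory=list)  # orthogonal variation points

def verify(node: VariabilityNode, check: Callable[[str], bool], cache: Dict[str, bool]) -> bool:
    # Verify the shared artifacts once; the cache guarantees they are never re-verified.
    for artifact in node.shared:
        if artifact not in cache:
            cache[artifact] = check(artifact)
    ok = all(cache[a] for a in node.shared)
    # Orthogonal sub-hierarchies yield independent, simpler verification obligations.
    return ok and all(verify(child, check, cache) for child in node.children)

# Toy cash-desk SPL: one common core and two orthogonal features.
spl = VariabilityNode("CashDesk", ["core.scan", "core.pay"], [
    VariabilityNode("CardPayment", ["card.auth"]),
    VariabilityNode("Loyalty", ["loyalty.points"]),
])
print(verify(spl, check=lambda artifact: True, cache={}))  # trivial checker: prints True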
Abstract:
The present thesis focuses on hole-doped lanthanum manganites and their thin-film forms. Hole-doped lanthanum manganites with higher substitutions of sodium are seldom reported in the literature. Such highly sodium-substituted lanthanum manganites are synthesized and a detailed investigation of their structural and magnetic properties is carried out. The magnetic nature of these materials near room temperature is investigated explicitly, and their magnetocaloric application potential is also assessed. After a thorough investigation of the bulk samples, thin films of the bulk counterparts are investigated. A magnetoelectric composite with ferroelectric and ferromagnetic components is developed using pulsed laser deposition, and the variation in its magnetic and electric properties is investigated. It is established that such a composite could be realized as a potential field-effect device. The central theme of the thesis is thus manganites, with the twin objectives of a materials study leading to the demonstration of a device. Sincere efforts are made to synthesize phase-pure compounds; their structural evaluation, compositional verification and evaluation of ferroelectric and ferromagnetic properties are also taken up. The focus of this investigation is therefore the magnetoelectric and magnetocaloric application potential of sodium-substituted lanthanum manganites. Bulk samples with Na substitution ranging from 50 to 90 percent were synthesized using a modified citrate-gel method and were found to be orthorhombic, belonging to the Pbnm space group. The variation in lattice parameters and unit-cell volume with sodium concentration is also examined. Magnetic measurements revealed that the magnetization decreased with increasing sodium concentration.
Abstract:
Over the past decades several approaches to schedulability analysis have been proposed for both uni-processor and multi-processor real-time systems. Although different techniques are employed, very little has been put forward using formal specifications, with the consequent possibility of misinterpretations or ambiguities in the problem statement. Using a logic-based approach to schedulability analysis in the design of hard real-time systems eases the synthesis of correct-by-construction procedures for both static and dynamic verification processes. In this paper we propose a novel approach to schedulability analysis based on a timed temporal logic with time durations. Our approach subsumes classical methods for uni-processor schedulability analysis over compositional resource models, providing the developer with counter-examples and ruling out schedules that cause safety violations in the system. We also provide an example showing the effectiveness of our proposal.
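For context (a standard result, not taken from the paper): the classical uni-processor test that such logic-based approaches aim to subsume is the Liu-Layland utilization bound for rate-monotonic scheduling of n independent periodic tasks with worst-case execution times C_i and periods T_i,

U = \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right),

a sufficient condition that tends to \ln 2 \approx 0.69 as n grows. Unlike such closed-form tests, a logic-based formulation can also produce counter-example schedules when schedulability fails.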
Abstract:
23rd International Conference on Real-Time Networks and Systems (RTNS 2015), Main Track, 4-6 November 2015, Lille, France. Best Paper Award Nominee.
Abstract:
There is an increasing emphasis on the use of software to control safety-critical plants across a wide range of applications. The importance of ensuring the correct operation of such potentially hazardous systems places an emphasis on verification of the system against a suitably secure specification. However, the process of verification is often made more complex by the concurrency and real-time considerations inherent in many applications. A response to this is the use of formal methods for the specification and verification of safety-critical control systems, which provide a mathematical representation of a system that permits reasoning about its properties. This thesis investigates the use of the formal method Communicating Sequential Processes (CSP) for the verification of a safety-critical control application. CSP is a discrete-event-based process algebra with a compositional axiomatic semantics that supports verification by formal proof. The application is an industrial case study concerning the concurrent control of a real-time high-speed mechanism. The case study shows that the axiomatic verification method employed is complex: it requires the user to have a relatively comprehensive understanding of both the proof system and the application. Through a series of observations, the thesis notes that CSP has the scope to support a more procedural approach to verification in the form of testing. The thesis investigates the technique of testing and proposes the method of Ideal Test Sets. By exploiting the underlying structure of the CSP semantic model, it is shown that for certain processes and specifications the obligation of verification can be reduced to that of testing the specification over a finite subset of the behaviours of the process.
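A hedged, hypothetical sketch of the underlying idea (not the Ideal Test Sets construction itself): verification of a trace-based safety specification is reduced to testing it over a finite subset of a process's behaviours.

from itertools import product

def traces(alphabet, max_len):
    # Enumerate all candidate traces up to a bounded length.
    for n in range(max_len + 1):
        yield from product(alphabet, repeat=n)

def process_accepts(trace):
    # Toy process: a controller that must 'arm' before it may 'fire'.
    armed = False
    for event in trace:
        if event == "arm":
            armed = True
        elif event == "fire" and not armed:
            return False      # not a behaviour of the process
    return True

def spec_holds(trace):
    # Safety specification: 'fire' never occurs before the first 'arm'.
    prefix = trace[:trace.index("arm")] if "arm" in trace else trace
    return "fire" not in prefix

def verify_by_testing(alphabet, max_len):
    # The verification obligation, reduced to testing the spec over a finite trace set.
    return all(spec_holds(t) for t in traces(alphabet, max_len) if process_accepts(t))

print(verify_by_testing(("arm", "fire", "reset"), max_len=4))   # prints True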
Abstract:
Milkfat-soybean oil blends were enzymatically interesterified (EIE) by Aspergillus niger lipase immobilized on a SiO2-PVA hybrid composite in a solvent-free system. An experimental mixture design was used to study the effects of binary milkfat-soybean oil (MF:SBO) blends at different proportions (0:100; 25:75; 33:67; 50:50; 67:33; 75:25; 100:0) on the compositional and textural properties of the EIE products, considering interesterification yield (IY), consistency and hardness as response variables. Lipase-catalysed interesterification increased the relative proportion of the C46-C52 TAGs and decreased the C40-C42 and C54 TAG concentrations. The highest IY (10.8%) was attained for the MF:SBO 67:33 EIE blend, which resulted in a more spreadable material at refrigerator temperature than butter, milkfat or the non-interesterified (NIE) blend; in this case, consistency and hardness values were at least 32% lower than those measured for butter. Thus, using A. niger lipase immobilized on SiO2-PVA improves the textural properties of milkfat and has potential for the development of a product incorporating unsaturated and essential fatty acids from soybean oil.
Abstract:
The development of Nb3Al and Nb3Sn superconductors is of great interest to the applied-superconductivity area. These intermetallic composites are normally obtained by heat-treatment reactions at high temperature, so processes that allow formation of the superconducting phases at lower temperatures (<1000 °C), particularly for Nb3Al, are of great interest. The present work studies the phase formation and stability of the Nb3Al and Nb3Sn superconducting phases using mechanical alloying (high-energy ball milling). Our main objective was to form composites near stoichiometry which could be transformed into the superconducting phases using low-temperature heat treatments. High-purity Nb-Sn and Nb-Al powders were mixed to the compositions required for the superconducting phases (Nb-25 at.% Sn and Nb-25 at.% Al) in an argon-atmosphere glove box. After milling in a Fritsch mill, the samples were compressed in a hydraulic uniaxial press and encapsulated in evacuated quartz tubes for heat treatment. The compressed and heat-treated samples were characterized by X-ray diffractometry; microstructure and chemical analysis were carried out using scanning electron microscopy and energy-dispersive spectrometry. Nb3Al XRD peaks were observed after sintering at 800 °C for the sample milled for 30 h, while Nb3Sn XRD peaks could be observed even before the heat treatment.
Abstract:
The different types of thermal crystallisation behaviour observed during continuous heating of Al-based metallic glasses have been successfully associated with the topological instability criterion, which is calculated simply from the alloy composition and the metallic radii of the alloying elements and aluminium. In the present work we report new results evidencing the correlation between the values of this criterion and the crystallisation behaviour of Al-based alloys of the Al-Ni-Ce system, and we compare the glass-forming abilities of alloys designed with compositions corresponding to the same topological instability condition. The results are discussed in terms of compositional and topological aspects, emphasizing the relevance of the different types of clusters in the amorphous phase in defining the stability of the glass and the type of thermal crystallisation.
Abstract:
This paper presents results of a verification test of a Direct Numerical Simulation code of mixed high order of accuracy using the method of manufactured solutions (MMS). The test is based on the formulation of an analytical solution for the Navier-Stokes equations modified by the addition of a source term. The numerical code was aimed at simulating the temporal evolution of instability waves in a plane Poiseuille flow. The governing equations were solved in a vorticity-velocity formulation for a two-dimensional incompressible flow. The code employed two different numerical schemes: one used mixed high-order compact and non-compact finite differences from fourth- to sixth-order accuracy; the other used spectral methods instead of finite differences in the streamwise direction, which was periodic. In the present test, particular attention was paid to the boundary conditions of the physical problem of interest. Indeed, the verification procedure using MMS can be more demanding than the commonly used comparison with Linear Stability Theory, in particular because the latter test pays no attention to the nonlinear terms. For the present verification test it was possible to manufacture an analytical solution that reproduced some aspects of an instability wave in a nonlinear stage. Although the results of the verification by MMS for this mixed-order numerical scheme had to be interpreted with care, the test was very useful, as it gave confidence that the code was free of programming errors.
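As a hedged, simplified illustration of the MMS workflow (using a 1-D advection-diffusion model equation rather than the paper's vorticity-velocity Navier-Stokes formulation): one manufactures a solution, derives the corresponding source term symbolically, and adds it to the solver so the observed order of accuracy can be checked against the exact manufactured field.

import sympy as sp

# Hypothetical 1-D advection-diffusion model problem: u_t + c*u_x - nu*u_xx = S(x, t).
x, t = sp.symbols('x t')
c, nu = sp.Rational(1), sp.Rational(1, 100)

# Manufactured solution: smooth and nontrivial in both space and time (chosen arbitrarily).
u_m = sp.sin(2*sp.pi*(x - t)) * sp.exp(-t)

# Source term obtained by substituting u_m into the governing operator.
S = sp.simplify(sp.diff(u_m, t) + c*sp.diff(u_m, x) - nu*sp.diff(u_m, x, 2))
print(S)

# In a verification run, S is added to the right-hand side of the discretised equations and
# the numerical solution is compared with u_m on successively refined grids to confirm the
# formal order of accuracy of the scheme.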
Abstract:
Chloride attack in marine environments, or in structures where deicing salts are used, will not always produce profiles with concentrations that decrease from the external surface towards the interior of the concrete. Some profiles show chloride concentrations that increase from the surface up to a certain depth, where a peak is formed. This type of profile must be analyzed differently from the traditional model based on Fick's second law in order to generate more precise service-life models. A model has previously been proposed for forecasting the penetration of chloride ions as a function of time for profiles in which such a peak has formed. To confirm the efficiency of this model, it is necessary to observe the behavior of a chloride profile with a peak in a specific structure over a period of time. To this end, two chloride profiles of different ages (22 and 27 years) were extracted from the same structure. The profile obtained from the 22-year sample was used to estimate the chloride profile at 27 years using three models: a) the traditional model using Fick's second law and extrapolating the value of C_s, the external-surface chloride concentration; b) the traditional model using Fick's second law with the x-axis shifted to the peak depth; c) the previously proposed model. The results from these models were compared with the actual profile measured in the 27-year sample and analyzed. The proposed model showed good precision for this case study, although it still needs to be tested on other in-service structures.
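For reference (the standard closed-form solution underlying the "traditional model", not a result of this paper): Fick's second law with a constant surface concentration and zero initial chloride content gives the error-function profile

C(x,t) = C_s \left[ 1 - \operatorname{erf}\!\left( \frac{x}{2\sqrt{D\,t}} \right) \right],

where C_s is the surface chloride concentration, D the apparent diffusion coefficient, x the depth and t the exposure time; model (b) above applies the same expression with the depth measured from the peak position instead of from the external surface.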
Abstract:
A large percentage of pile caps support only one column, and the pile caps in turn are supported by only a few piles. These are typically short, deep members with overall span-depth ratios of less than 1.5. Codes of practice do not provide uniform treatment for the design of these types of pile caps. Such members have traditionally been designed as beams spanning between piles, with the depth selected to avoid shear failures and the amount of longitudinal reinforcement selected to provide sufficient flexural capacity as calculated by engineering beam theory. More recently, the strut-and-tie method has been used for the design of pile caps (disturbed or D-regions), in which the load path is envisaged as a three-dimensional truss, with compressive forces carried by concrete struts between the column and the piles and tensile forces carried by reinforcing steel located between the piles. Neither of these models has provided uniform factors of safety against failure or been able to predict whether failure will occur by flexure (ductile mode) or shear (brittle mode). In this paper, an analytical model based on the strut-and-tie approach is presented. The proposed model has been calibrated using an extensive experimental database of pile caps subjected to compression and evaluated analytically for more complex loading conditions. It is shown to be applicable across a broad range of test data and can predict the failure modes, cracking, yielding and failure loads of four-pile caps with reasonable accuracy.
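To make the truss idea concrete (a schematic equilibrium statement, not the calibrated model proposed in the paper): for a symmetric four-pile cap carrying a centred column load P, vertical equilibrium at each pile head gives

F_{strut} = \frac{P/4}{\sin\theta}, \qquad H = \frac{P}{4}\,\cot\theta,

where \theta is the inclination of the strut running from the column node to the pile head; the horizontal component H at each pile head must be equilibrated by the tie reinforcement placed between the piles, while the strut force is checked against the effective compressive strength of the concrete.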
Abstract:
Conventional procedures used to assess the integrity of corroded piping systems with axial defects generally employ simplified failure criteria based upon a plastic-collapse failure mechanism incorporating the tensile properties of the pipe material. These methods establish acceptance criteria for defects based on limited experimental data for low-strength structural steels, which do not necessarily address the specific requirements of the high-grade steels currently used. In such cases, failure assessments may be overly conservative or show significant scatter in their predictions, leading to unnecessary repair or replacement of in-service pipelines. Motivated by these observations, this study examines the applicability of a stress-based criterion built upon plastic instability analysis to predict the failure pressure of corroded pipelines with axial defects. A central focus is to gain additional insight into the effects of defect geometry and material properties on the attainment of a local limit load, to support the development of stress-based burst-strength criteria. The work provides an extensive body of results which lend further support to adopting failure criteria for corroded pipelines based upon ligament instability analyses. A verification study conducted on burst tests of large-diameter pipe specimens with different defect lengths shows the effectiveness of a stress-based criterion using local ligament instability in burst-pressure predictions, even though the adopted burst criterion exhibits a potential dependence on defect geometry and possibly on the material's strain-hardening capacity. Overall, the results presented here suggest that stress-based criteria built upon plastic instability analysis of the defect ligament are a valid engineering tool for integrity assessments of pipelines with axial corrosion defects.
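For comparison only (a schematic of the conventional plastic-collapse criteria mentioned above, in the spirit of the B31G family, and not the stress-based criterion examined in this study): such methods estimate the failure pressure of a pipe with an axial defect of depth d in a wall of thickness t and diameter D from a flow stress \sigma_{flow} and a length-dependent Folias bulging factor M,

P_f = \frac{2\,\sigma_{flow}\,t}{D}\;\frac{1 - d/t}{1 - (d/t)/M},

so that the defect-free Barlow pressure 2\sigma_{flow}t/D is reduced according to the remaining ligament; the specific definitions of \sigma_{flow} and M vary between criteria.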
Abstract:
Modern integrated circuit (IC) design is characterized by a strong trend of intellectual property (IP) core integration into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but due to state explosion their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input-space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random, simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test-case scenarios are removed from the input space. To this end, a tool that automatically generates PD-based stimuli sources was developed, along with a second tool that generates functional coverage models fitting exactly the PD-based input space. Together, the input-stimulus and coverage-model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation-time reduction when generating stimuli with the PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation-time reduction when combining the stimuli sources with their corresponding, automatically generated, coverage models.
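A hedged, hypothetical sketch of the general idea (not the PD formalism or the generated tools described in the paper): stimuli are drawn only from explicitly declared valid parameter domains, and the coverage model is built from exactly those domains, so no simulation time is spent on irrelevant or invalid scenarios.

import random
from itertools import product

# Declared parameter domains (valid values only); invalid combinations are excluded up front.
domains = {
    "opcode":  ["READ", "WRITE"],
    "burst":   [1, 4, 8],
    "aligned": [True, False],
}
invalid = {("WRITE", 8, False)}   # example of an excluded (invalid) scenario

valid_space = [combo for combo in product(*domains.values()) if combo not in invalid]

def random_stimulus():
    # Constrained-random generation: sample uniformly from the valid input space.
    return random.choice(valid_space)

# Coverage model derived from the same domains: one bin per valid combination.
coverage = {combo: 0 for combo in valid_space}

for _ in range(200):          # short constrained-random run
    coverage[random_stimulus()] += 1

hit = sum(1 for count in coverage.values() if count > 0)
print(f"functional coverage: {hit}/{len(coverage)} bins hit")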
Abstract:
Using spontaneous parametric down-conversion, we produce polarization-entangled states of two photons and characterize them using two-photon tomography to measure the density matrix. A controllable decoherence is imposed on the states by passing the photons through thick, adjustable birefringent elements. When the system is subject to collective decoherence, one particular entangled state is seen to be decoherence-free, as predicted by theory. Such decoherence-free systems may have an important role for the future of quantum computation and information processing.
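To illustrate why such a state can be decoherence-free (a textbook argument, not a detail taken from the paper): if the birefringent elements apply the same relative phase to the vertical polarization of both photons (collective dephasing), the basis states transform as |HV⟩ → e^{i\varphi}|HV⟩ and |VH⟩ → e^{i\varphi}|VH⟩, so any superposition of them, for example

|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|HV\rangle - |VH\rangle\right) \;\longrightarrow\; e^{i\varphi}\,|\psi\rangle,

acquires only an unobservable global phase; its density matrix, and hence the measured tomography, is unchanged by the collective noise.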