923 results for correctness verification


Relevance: 10.00%

Abstract:

Product Lifecycle Management (PLM) systems are widely used in the manufacturing industry. A core feature of such systems is support for versioning of product data. As workflow functionality is increasingly used in PLM systems, the possibility emerges that the versioning transitions for product objects encapsulated in process models do not comply with the valid version control policies mandated in the objects' actual lifecycles. In this paper we propose a solution to these (non-)compliance issues between processes and object version control policies. We formally define the notion of compliance between these two artifacts in product lifecycle management and then develop a compliance checking method that employs a well-established workflow analysis technique. This forms the basis of a tool offering automated support for the proposed approach. By applying the approach to a collection of real-life specifications in a major PLM system, we demonstrate the practical applicability of our solution to the field.
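The compliance notion described above can be sketched in miniature: a version control policy is modelled as a set of allowed lifecycle transitions, and a process model complies if every versioning transition it encapsulates is allowed. All state names and the policy below are illustrative, not the paper's actual formalism.

```python
# Hypothetical version control policy: the set of allowed lifecycle transitions.
POLICY = {
    ("in_work", "released"),
    ("released", "frozen"),
    ("in_work", "in_work"),   # revising an object that is still in work
}

def violations(process_transitions, policy=POLICY):
    """Return the versioning transitions of the process that the policy forbids."""
    return [t for t in process_transitions if t not in policy]

process = [("in_work", "released"), ("frozen", "in_work")]
print(violations(process))    # [('frozen', 'in_work')] -> process is non-compliant
```

A real checker would extract the transitions from the process model and the policy from the object lifecycle, then decide compliance by workflow analysis rather than simple set membership.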

Relevance: 10.00%

Abstract:

Material yielding is typically modelled by either plastic zone or plastic hinge methods in the context of geometric and material nonlinear finite element analysis. In fire analysis of steel structures, the plastic zone method is widely used, but it requires considerably more computational effort. The objective of this paper is to develop a nonlinear material model allowing for the interaction of axial force and bending moment, relying on the plastic hinge method to achieve numerical efficiency and reduce computational effort. The main advantages of the plastic-hinge approach are its computational efficiency and the ease of verifying the axial force–moment interaction yield criterion for beam–column members against design code formulae. Further, the method is reliable and robust when used in the analysis of practical, large structures. To allow for the effect of catenary action, axial thermal expansion is considered in the axial restraint equations. The yield function incorporated in the stiffness formulation, which accounts for both axial force and bending moment effects, predicts the behaviour of frames under fire more accurately and rationally. In the present fire analysis, the mechanical properties at elevated temperatures mainly follow Eurocode 3 [Design of steel structures, Part 1.2: Structural fire design. European Committee for Standardisation; 2003]. An example of a tension member under a steady-state heating condition is modelled to verify the proposed spring formulation and to compare with results by others. The behaviour of a heated member in a highly redundant structure is also studied with the present approach.
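A simplified sketch (not the paper's exact formulation) of the kind of check a plastic-hinge method performs: a normalized axial force–bending moment interaction yield function, with section capacities degraded by indicative Eurocode 3 yield strength reduction factors at elevated temperature.

```python
# Indicative EC3 yield strength reduction factors k_y(theta), theta in Celsius.
K_Y = {20: 1.00, 400: 1.00, 500: 0.78, 600: 0.47, 700: 0.23}

def yield_function(P, M, Py, Mp, theta=20):
    """Linear P-M interaction: f < 0 elastic, f >= 0 plastic hinge forms."""
    k = K_Y[theta]                       # strength reduction at temperature theta
    return abs(P) / (k * Py) + abs(M) / (k * Mp) - 1.0

# At ambient temperature this force pair is elastic; at 600 C the same pair
# exceeds the shrunken yield surface, so a plastic hinge would form.
print(yield_function(0.2, 0.3, 1.0, 1.0))                 # -0.5 (elastic)
print(yield_function(0.2, 0.3, 1.0, 1.0, theta=600) > 0)  # True (hinge)
```

The linear interaction surface is a placeholder; design codes prescribe section-specific P-M formulae, which is precisely what makes the plastic-hinge formulation easy to verify against them.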

Relevance: 10.00%

Abstract:

Object classification is plagued by session variation: any variation that makes one instance of an object look different from another, for instance due to pose or illumination changes. Recent work on the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations; however, in computer vision it has so far been applied only in the limited setting of face verification. In this paper we propose a local region based intersession variability (ISV) modelling approach, termed Local ISV, which models session variations locally, and apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database that includes images taken underwater, providing significant real-world session variations. The Local ISV approach yields a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases, and a relative performance improvement of 35% on our fish image dataset.
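The core intuition behind session variability modelling, that session effects occupy a low-dimensional subspace that can be estimated and removed, can be conveyed with a linear toy example. This is not the paper's actual model (ISV in the literature is formulated on Gaussian mixture models); the subspace and vectors below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(16, 2)))   # hypothetical session subspace (orthonormal)

def compensate(x, U):
    """Remove the session-subspace component of feature vector x."""
    return x - U @ (U.T @ x)

identity = rng.normal(size=16)                  # "true" object signature
session = U @ np.array([3.0, -2.0])             # session offset (e.g. lighting)
x = identity + session                          # observed feature

# After compensation, the observed feature matches the compensated identity:
# the session component lay entirely in span(U) and has been projected out.
print(np.linalg.norm(compensate(x, U) - compensate(identity, U)))  # ~0
```

Local ISV applies this style of compensation per image region rather than globally, so that spatially localized session effects can be removed.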

Relevance: 10.00%

Abstract:

We present two unconditionally secure protocols for private set disjointness tests. To provide intuition for our protocols, we first give a naive construction based on Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the intersection cardinality; more specifically, it discloses a lower bound on it. Using Lagrange interpolation, we then provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries, in which a verification test detects misbehaving participants. Both protocols require O(1) rounds of communication, and are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information theoretic security. To our knowledge, our protocols are the first designed without generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
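The naive Sylvester-matrix construction can be made concrete: encode each party's set as the roots of a polynomial; the sets are disjoint exactly when the polynomials share no root, i.e. when their resultant, the determinant of their Sylvester matrix, is nonzero. The sketch below shows only this underlying algebra, computed in the clear; as noted above, revealing the resultant itself is insecure because it leaks information about the intersection cardinality.

```python
def poly_from_roots(roots):
    """Integer coefficients of prod (x - r), highest degree first."""
    coeffs = [1]
    for r in roots:
        coeffs = coeffs + [0]
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]
    return coeffs

def sylvester(f, g):
    """Sylvester matrix of polynomials f (degree m) and g (degree n)."""
    m, n = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (m - 1 - i) for i in range(m)]
    return rows

def det(M):
    """Exact cofactor-expansion determinant (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def disjoint(A, B):
    """True iff the resultant of the two set-encoding polynomials is nonzero."""
    return det(sylvester(poly_from_roots(sorted(A)), poly_from_roots(sorted(B)))) != 0

print(disjoint({1, 2}, {3, 4}))  # True  (no common element, resultant nonzero)
print(disjoint({1, 2}, {2, 5}))  # False (shared element 2 -> resultant is 0)
```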

Relevance: 10.00%

Abstract:

Aiming at large-scale numerical simulation of particle-reinforced materials, the concept of a local Eshelby matrix is introduced into the computational model of the eigenstrain boundary integral equation (BIE) to address interactions among particles. The local Eshelby matrix can be considered a numerical extension of the concepts of the Eshelby tensor and the equivalent inclusion. Taking the subdomain boundary element method as the reference, three-dimensional stress analyses are carried out for ellipsoidal particles in full space with the proposed computational model. The numerical examples verify not only the correctness and feasibility but also the high efficiency of the present model and its solution procedure, demonstrating its potential for large-scale numerical simulation of particle-reinforced materials.
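Eshelby's equivalent-inclusion idea, which the local Eshelby matrix generalises to numerical form, can be illustrated with a one-dimensional toy: scalars stand in for tensors, S plays the role of the Eshelby tensor, and all values are arbitrary. A particle of stiffness C1 in a matrix of stiffness C0 under remote strain e0 is replaced by matrix material carrying a fictitious eigenstrain e* chosen so the stresses match: C1(e0 + S e*) = C0(e0 + S e* - e*).

```python
def eigenstrain(C0, C1, S, e0):
    """Solve the scalar consistency condition for the equivalent eigenstrain."""
    return (C0 - C1) * e0 / ((C1 - C0) * S + C0)

C0, C1, S, e0 = 1.0, 3.0, 0.5, 0.01      # illustrative stiffnesses and strain
e_star = eigenstrain(C0, C1, S, e0)

lhs = C1 * (e0 + S * e_star)             # stress in the real inhomogeneity
rhs = C0 * (e0 + S * e_star - e_star)    # stress in the equivalent inclusion
print(abs(lhs - rhs))                    # ~0: consistency condition satisfied
```

In the BIE model above, the same consistency condition couples many particles at once, which is what the local Eshelby matrix organises.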

Relevance: 10.00%

Abstract:

This paper presents a formal security analysis of the current Australian e-passport implementation using the model checking tools CASPER/CSP/FDR. We highlight security issues in the current implementation and identify new threats when an e-passport system is integrated with an automated processing system like SmartGate. The paper also provides a security analysis of the European Union (EU) proposal for Extended Access Control (EAC), which is intended to provide improved security in protecting biometric information of the e-passport bearer. The current e-passport specification fails to provide a list of adequate security goals that could be used for security evaluation; we fill this gap by presenting a collection of security goals for the evaluation of e-passport protocols. Our analysis confirms previously identified security weaknesses and shows that both the Australian e-passport implementation and the EU proposal fail to address many security and privacy aspects that are paramount to implementing a secure border control mechanism. ACM Classification C.2.2 (Communication/Networking and Information Technology – Network Protocols – Model Checking), D.2.4 (Software Engineering – Software/Program Verification – Formal Methods), D.4.6 (Operating Systems – Security and Privacy Protection – Authentication)

Relevance: 10.00%

Abstract:

Recently, botnets, networks of compromised computers, have been recognized as one of the biggest threats to the Internet. The bots in a botnet communicate with the botnet owner via a communication channel called the Command and Control (C&C) channel. There are three main types of C&C channel: Internet Relay Chat (IRC), Peer-to-Peer (P2P) and web-based protocols. By exploiting the flexibility of Web 2.0 technology, web-based botnets have reached a new level of sophistication; in August 2009, such a botnet was found on Twitter, one of the most popular Web 2.0 services. In this paper, we describe a new type of botnet that uses a Web 2.0 service as a C&C channel and as temporary storage for stolen information. We then propose a novel approach to thwart this type of attack, combining a unique identifier of the computer, an encryption algorithm with session keys, and CAPTCHA verification.
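As a rough illustration of the cryptographic ingredients named above, a machine-bound key plus per-session keys, the sketch below shows one plausible way they could combine; it is not the paper's actual scheme, and all identifiers are invented.

```python
import hashlib
import hmac
import os

# Hypothetical machine binding: derive a key from a unique computer identifier.
MACHINE_ID = b"00:1a:2b:3c:4d:5e"
machine_key = hashlib.sha256(b"machine-binding|" + MACHINE_ID).digest()

def protect(message: bytes):
    """Bind a message to this machine with a fresh session key and a MAC."""
    session_key = os.urandom(16)          # fresh per session
    tag = hmac.new(machine_key, session_key + message, hashlib.sha256).digest()
    return session_key, message, tag

def verify(session_key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(machine_key, session_key + message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

sk, msg, tag = protect(b"status update")
print(verify(sk, msg, tag))          # True: untampered, bound to this machine
print(verify(sk, b"tampered", tag))  # False: modified message is rejected
```

In the proposed defence, a CAPTCHA step would additionally ensure a human, not a bot, is behind the interaction; that part is not modelled here.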

Relevance: 10.00%

Abstract:

The increasing importance and use of infrastructure such as bridges demands more effective structural health monitoring (SHM) systems. SHM addresses damage detection through several methods, such as modal strain energy (MSE). Many available MSE methods, however, have either been validated only for limited types of structure, such as beams, or perform unsatisfactorily; they therefore require further improvement and validation for different types of structures. In this study, an MSE method was mathematically improved to precisely quantify structural damage at an early stage of formation. The MSE equation was first formulated accurately, accounting for the damaged stiffness, and then used to derive a more accurate sensitivity matrix. The improved method was verified on two plane structures, a steel truss bridge model and a concrete frame bridge model, representative of short- and medium-span bridges. Two damage scenarios, single- and multiple-damage, were considered for each structure. For each structure, both intact and damaged, modal analysis was performed using STRAND7; effects of up to 5 per cent noise were also included. The simulated mode shapes and natural frequencies were then imported into a MATLAB code. The results indicate that the improved method converges quickly and agrees well with the numerical assumptions within a few computational cycles, and that it performs well even in the presence of noise. The findings of this study can be extended numerically to 2D infrastructure, particularly short- and medium-span bridges, to detect and quantify damage more accurately. The method is capable of supporting an SHM regime that facilitates timely maintenance of bridges, minimising possible loss of life and property.
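A minimal sketch of the underlying MSE quantity (not the paper's improved sensitivity-matrix formulation): on a fixed-free spring chain with unit masses, element i's modal strain energy in mode j is phi_j^T K_i phi_j, and comparing damaged against intact values over the lowest (measured) modes flags candidate damage locations. The chain, stiffness values and damage case are illustrative.

```python
import numpy as np

n = 5                                    # number of springs / free DOFs

def elem_K(i, k):
    """Global stiffness contribution of spring i (between nodes i-1 and i)."""
    K = np.zeros((n, n))
    if i == 0:                           # spring attached to the fixed support
        K[0, 0] = k
    else:
        K[np.ix_([i - 1, i], [i - 1, i])] = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def mode_shapes(stiffs):
    K = sum(elem_K(i, k) for i, k in enumerate(stiffs))
    _, phi = np.linalg.eigh(K)           # columns ordered by ascending frequency
    return phi

def elemental_mse(phi, stiffs, n_modes=2):
    """Per-element MSE summed over the lowest n_modes measured modes."""
    return np.array([sum(phi[:, j] @ elem_K(i, k) @ phi[:, j] for j in range(n_modes))
                     for i, k in enumerate(stiffs)])

intact = np.full(n, 1000.0)
damaged = intact.copy()
damaged[2] *= 0.8                        # 20% stiffness loss in element 2

phi_u, phi_d = mode_shapes(intact), mode_shapes(damaged)
# Intact elemental stiffness is used in both terms; values well above 1 hint
# at damage near that element.
index = elemental_mse(phi_d, intact) / elemental_mse(phi_u, intact)
print(index.round(3))
```

Only a few low modes are used because, in practice, only those are reliably measured; using all modes would make the indicator degenerate.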

Relevance: 10.00%

Abstract:

Amiton (O,O-diethyl-S-[2-(diethylamino)ethyl]phosphorothiolate), otherwise known as VG, is listed in schedule 2 of the Chemical Weapons Convention (CWC) and has a structure closely related to VX (O-ethyl-S-(2-diisopropylamino)ethylmethylphosphonothiolate). Fragmentation of protonated VG in the gas phase was performed using electrospray ionisation ion trap mass spectrometry (ESI-ITMS) and revealed several characteristic product ions. Quantum chemical calculations provide the most probable structures for these ions as well as the likely unimolecular mechanisms by which they are formed. The decomposition pathways predicted by computation are consistent with deuterium-labeling studies. The combination of experimental and theoretical data suggests that the fragmentation pathways of VG and analogous organophosphorus nerve agents, such as VX and Russian VX, are predictable and thus ESI tandem mass spectrometry is a powerful tool for the verification of unknown compounds listed in the CWC. Copyright (c) 2006 Commonwealth of Australia. Published by John Wiley & Sons, Ltd.

Relevance: 10.00%

Abstract:

This paper provides a detailed description of the current Australian e-passport implementation and presents a formal verification using the model checking tools CASPER/CSP/FDR. We highlight security issues present in the current e-passport implementation and identify new threats when an e-passport system is integrated with an automated processing system like SmartGate. Because the current e-passport specification does not provide adequate security goals, we identify and describe a set of security goals for the evaluation of e-passport protocols in order to perform a rational security analysis. Our analysis confirms existing security issues that were previously informally identified and presents weaknesses that exist in the current e-passport implementation.

Relevance: 10.00%

Abstract:

We present efficient protocols for private set disjointness tests. We start from an intuitive construction based on Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the cardinality of the intersection; more specifically, it discloses a lower bound on it. Using Lagrange interpolation, we provide a protocol for the honest-but-curious case without revealing any additional information. Finally, we describe a protocol that is secure against malicious adversaries, applying a verification test to detect misbehaving participants. Both protocols require O(1) rounds of communication, and are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information theoretic security. To our knowledge, our protocols are the first designed without generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.

Relevance: 10.00%

Abstract:

Process models define allowed process execution scenarios. The models are usually depicted as directed graphs, with gateway nodes regulating the control flow routing logic and with edges specifying the execution order constraints between tasks. While arbitrarily structured control flow patterns in process models complicate model analysis, they also permit creativity and full expressiveness when capturing non-trivial process scenarios. This paper gives a classification of arbitrarily structured process models based on the hierarchical process model decomposition technique. We identify a structural class of models consisting of block structured patterns which, when combined, define complex execution scenarios spanning across the individual patterns. We show that complex behavior can be localized by examining structural relations of loops in hidden unstructured regions of control flow. The correctness of the behavior of process models within these regions can be validated in linear time. These observations allow us to suggest techniques for transforming hidden unstructured regions into block-structured ones.

Relevance: 10.00%

Abstract:

Analysis of behavioural consistency is an important aspect of software engineering. In process and service management, consistency verification of behavioural models has manifold applications. For instance, a business process model used as a system specification and a corresponding workflow model used as an implementation have to be consistent. Another example is analysing to what degree a process log of executed business operations is consistent with the corresponding normative process model. Typically, existing notions of behaviour equivalence, such as bisimulation and trace equivalence, are applied as consistency notions. Still, these notions are exponential to compute and yield only a Boolean result. In many cases, however, a quantification of behavioural deviation is needed, along with concepts to isolate the source of deviation. In this article, we propose causal behavioural profiles as the basis for a consistency notion. These profiles capture essential behavioural information, such as order, exclusiveness, and causality between pairs of activities of a process model. Consistency based on these profiles is weaker than trace equivalence, but can be computed efficiently for a broad class of models. We introduce techniques for the computation of causal behavioural profiles using structural decomposition techniques for sound free-choice workflow systems whose unstructured net fragments are acyclic or can be traced back to S- or T-nets. We also elaborate on the findings of applying our technique to three industry model collections.
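The profile relations can be sketched from example traces. The article computes profiles structurally from workflow nets; this log-based version is illustrative only, and the consistency degree shown is simply the fraction of activity pairs whose relation matches between two profiles.

```python
from itertools import permutations

def profile(traces):
    """Derive order / exclusiveness relations for each ordered activity pair."""
    acts, order, together = set(), set(), set()
    for t in traces:
        acts |= set(t)
        together |= {(a, b) for a in t for b in t}
        for i, a in enumerate(t):
            for b in t[i + 1:]:
                order.add((a, b))
    rel = {}
    for a, b in permutations(sorted(acts), 2):
        if (a, b) not in together:
            rel[(a, b)] = "+"                    # exclusive: never co-occur
        elif (a, b) in order and (b, a) in order:
            rel[(a, b)] = "||"                   # interleaving: both orders seen
        elif (a, b) in order:
            rel[(a, b)] = "->"                   # strict order
        else:
            rel[(a, b)] = "<-"                   # reverse strict order
    return rel

def consistency(p, q):
    """Fraction of shared activity pairs whose relation coincides."""
    shared = p.keys() & q.keys()
    return sum(p[k] == q[k] for k in shared) / len(shared)

spec = profile([("a", "b", "c"), ("a", "c", "b")])  # b and c may interleave
log = profile([("a", "b", "c")])                    # log fixes b before c
print(round(consistency(spec, log), 3))             # 0.667: b,c ordering deviates
```

Unlike a Boolean equivalence check, the pairs where the relations differ directly isolate the source of deviation (here, the b/c ordering).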

Relevance: 10.00%

Abstract:

Background: Radiographic examinations of the ankle are important in the clinical management of ankle injuries in hospital emergency departments. National (Australian) Emergency Access Targets (NEAT) stipulate that 90 per cent of presentations should leave the emergency department within 4 hours. For a radiological report to have clinical usefulness and relevance to clinical teams treating patients with ankle injuries in emergency departments, the report needs to be prepared and available to the clinical team within the NEAT 4-hour timeframe, before the patient has left the emergency department. However, little is known about the demand profile of ankle injuries requiring radiographic examination, or the time until radiological reports are available for this clinical group, in Australian public hospital emergency settings.
Methods: This study utilised a prospective cohort of consecutive ankle examinations from patients (n=437) with suspected traumatic ankle injuries presenting to the emergency department of a tertiary hospital facility. Time stamps from the hospital Picture Archiving and Communication System were used to record three processing milestones for each patient's radiographic examination: the time of image acquisition, the time a provisional radiological report became available for viewing by referring clinical teams, and the time of final verification of the radiological report.
Results: Radiological reports and all three time stamps were available for 431 (98.6%) cases, which were included in the analysis. The total time between image acquisition and final radiological report verification exceeded 4 hours for 404 (92.5%) cases. Peak demand for radiographic examination of ankles occurred on weekend days and in the afternoon and evening, whereas the majority of examinations were provisionally reported and verified during weekday daytime shift hours.
Conclusions: Provisional or final radiological reports were frequently not available within 4 hours of image acquisition in this sample. Effective and cost-efficient strategies to improve the support provided to referring clinical teams by medical imaging departments may enhance emergency care interventions for people presenting to emergency departments with ankle injuries, particularly those with imaging findings that junior clinical staff may find challenging to interpret without a definitive radiological report.
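The timing analysis itself reduces to a simple comparison of time-stamp deltas against the 4-hour window; a minimal sketch with invented PACS time stamps:

```python
from datetime import datetime, timedelta

FOUR_HOURS = timedelta(hours=4)
FMT = "%Y-%m-%d %H:%M"

# Illustrative (acquisition, provisional report, final verification) stamps.
cases = [
    ("2013-05-04 18:02", "2013-05-04 19:10", "2013-05-06 09:15"),  # weekend case
    ("2013-05-06 10:30", "2013-05-06 11:05", "2013-05-06 13:40"),  # weekday case
]

def within_target(acquired: str, verified: str) -> bool:
    """True if final verification fell within 4 hours of image acquisition."""
    delta = datetime.strptime(verified, FMT) - datetime.strptime(acquired, FMT)
    return delta <= FOUR_HOURS

flags = [within_target(acq, ver) for acq, _prov, ver in cases]
print(flags)  # [False, True]
```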

Relevance: 10.00%

Abstract:

Integer ambiguity resolution is an indispensable procedure for all high precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. Integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarised as follows. (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analysed for the first time: the difference test is an approximation to the optimal integer aperture, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time: both may under-evaluate the failure risk if the model is not strong enough or the float ambiguities fall in particular regions. (4) Extensive numerical results are used to compare the performance of the two tests. The simulations show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and the failure rate tolerance, with a smaller failure rate leading to a larger performance discrepancy.
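The two validation tests analysed above can be sketched directly. Here q1 and q2 are the squared norms of the best and second-best integer candidates in the metric of the float ambiguity covariance; the threshold values c and d are illustrative only, since in practice they are chosen to control the failure rate.

```python
def ratio_test(q1: float, q2: float, c: float = 2.0) -> bool:
    """Accept the fixed solution if the runner-up is relatively much worse."""
    return q2 / q1 >= c

def difference_test(q1: float, q2: float, d: float = 6.0) -> bool:
    """Accept if the runner-up is worse by at least an absolute margin d."""
    return q2 - q1 >= d

# The acceptance regions differ in shape (ellipsoid overlaps vs half-space
# overlaps), so the same candidate pair can pass one test and fail the other.
q1, q2 = 1.3, 4.2
print(ratio_test(q1, q2), difference_test(q1, q2))   # True False
```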