995 results for DYNAMIC VERIFICATION


Relevance: 70.00%

Abstract:

We propose a dynamic verification approach for large-scale message passing programs to locate correctness bugs caused by unforeseen nondeterministic interactions. This approach hinges on an efficient protocol to track the causality between nondeterministic message receive operations and potentially matching send operations. We show that causality tracking protocols that rely solely on logical clocks fail to capture all nuances of MPI program behavior, including the variety of ways in which nonblocking calls can complete. Our approach rests on formally defining the matches-before relation underlying the MPI standard, and on devising lazy-update logical clock algorithms that can correctly discover all potential outcomes of nondeterministic receives in practice. Our lazy-update protocol, LLCP, can achieve the same coverage as a vector clock based algorithm while maintaining good scalability. LLCP allows us to analyze realistic MPI programs involving a thousand MPI processes, incurring only modest overheads in terms of communication bandwidth, latency, and memory consumption. © 2011 IEEE.
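As a rough illustration of the kind of causality tracking described above (not the paper's LLCP itself), the sketch below piggybacks a scalar Lamport-style clock on each message and, at a wildcard (nondeterministic) receive, reports other pending sends whose timestamps do not causally follow the receive and are therefore potential alternative matches. All names (`Process`, `send`, `wildcard_recv`) are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of Lamport-clock-based potential-match detection for
# nondeterministic (wildcard) receives. Illustrative only; this is not the
# LLCP protocol from the paper, just the general idea of comparing
# piggybacked clocks to decide whether a send could match a wildcard receive.

class Process:
    def __init__(self, rank):
        self.rank = rank
        self.clock = 0          # scalar Lamport clock

    def send(self, payload):
        self.clock += 1
        return {"src": self.rank, "ts": self.clock, "payload": payload}

    def wildcard_recv(self, msg, pending):
        """Receive `msg`; report other pending sends that could also match."""
        self.clock = max(self.clock, msg["ts"]) + 1
        # A pending send is a potential alternative match if it does not
        # causally follow this receive, i.e. its timestamp is not ahead of
        # the receiver's updated clock.
        return [m for m in pending if m is not msg and m["ts"] <= self.clock]


p0, p1, p2 = Process(0), Process(1), Process(2)
m0 = p0.send("a")
m1 = p1.send("b")
alternatives = p2.wildcard_recv(m0, pending=[m0, m1])
print("sends that could also have matched:", alternatives)
```

Keeping the piggybacked data down to a single scalar per message is what makes such protocols cheap at scale; the abstract's point is that a naive scalar clock misses some MPI orderings, which is what the lazy-update refinement addresses.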

Relevance: 70.00%

Abstract:

On-time completion is an important temporal QoS (Quality of Service) dimension and one of the fundamental requirements for high-confidence workflow systems. In recent years, a workflow temporal verification framework, which generally consists of temporal constraint setting, temporal checkpoint selection, temporal verification, and temporal violation handling, has been the major approach for assuring high temporal QoS in workflow systems. Within this framework, effective temporal checkpoint selection, which aims to detect intermediate temporal violations along workflow execution in a timely fashion, plays a critical role. Temporal checkpoint selection has therefore been a major topic and has attracted significant research effort. In this paper, we present an overview of workflow temporal checkpoint selection for temporal verification. Specifically, we first introduce the throughput-based and response-time-based temporal consistency models for business and scientific cloud workflow systems, respectively. Then the corresponding benchmarking checkpoint selection strategies that satisfy the property of “necessity and sufficiency” are presented. We also provide experimental results to demonstrate the effectiveness of our checkpoint selection strategies, and finally point out some possible future issues in this research area.
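A minimal sketch of what a response-time based, “necessary and sufficient” checkpoint test can look like: a checkpoint is selected at an activity only when the accumulated real execution time plus the remaining estimated time exceeds the deadline (so every selection corresponds to a real violation), and a checkpoint is always selected when that happens (so no violation is missed). The function and data below are illustrative assumptions, not the strategies from the cited work.

```python
# Illustrative response-time based checkpoint selection: select a checkpoint
# at activity i only when temporal consistency is violated at that point.
# `elapsed[i]` is the real run time up to and including activity i,
# `remaining_estimate[i]` the estimated time still needed afterwards.

def select_checkpoints(elapsed, remaining_estimate, deadline):
    checkpoints = []
    for i, (done, todo) in enumerate(zip(elapsed, remaining_estimate)):
        if done + todo > deadline:
            checkpoints.append(i)   # a violation exists, so selection is necessary
        # no violation: skipping keeps selection minimal yet still sufficient
    return checkpoints


elapsed = [10, 25, 43, 60]             # real cumulative minutes after each activity
remaining_estimate = [45, 32, 20, 8]   # estimated minutes still required afterwards
print(select_checkpoints(elapsed, remaining_estimate, deadline=60))   # -> [2, 3]
```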

Relevance: 70.00%

Abstract:

Workflow temporal verification is conducted to guarantee on-time completion, which is one of the most important QoS (Quality of Service) dimensions for business processes running in the cloud. However, as today's business systems often need to handle a large number of concurrent customer requests, conventional response-time based process monitoring strategies conducted in a one-by-one fashion cannot be applied efficiently to a large batch of parallel processes because of significant time overhead. Similar situations may also exist in software companies where multiple software projects are carried out at the same time by software developers. To address this problem, based on a novel runtime throughput consistency model, this paper proposes a QoS-aware throughput based checkpoint selection strategy, which can dynamically select a small number of checkpoints along the system timeline to facilitate the temporal verification of throughput constraints and achieve the target on-time completion rate. Experimental results demonstrate that our strategy achieves the best efficiency and effectiveness compared with the state-of-the-art and other representative response-time based checkpoint selection strategies.
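A hedged sketch of the throughput-oriented idea: instead of checking every process instance individually, only the aggregate completion count is checked at a few time points, and a checkpoint is raised when the observed throughput falls below what is needed to reach the target on-time completion rate. Names and numbers below are illustrative only.

```python
# Illustrative throughput-based checkpoint test for a batch of parallel
# process instances. A checkpoint is raised at time t only when the number
# of completed instances falls below the count required to stay on track
# for the target on-time completion rate by the deadline.

def throughput_checkpoint(completed, total, t, deadline, target_rate):
    required_by_now = target_rate * total * (t / deadline)   # linear reference line
    return completed < required_by_now                        # True -> select checkpoint


total = 1000           # concurrent instances in the batch
deadline = 120         # minutes
target_rate = 0.95     # desired on-time completion rate
for t, completed in [(30, 260), (60, 430), (90, 700)]:
    print(t, throughput_checkpoint(completed, total, t, deadline, target_rate))
```

Checking a handful of time points against an aggregate rate, rather than every instance against its own response-time constraint, is what keeps the monitoring overhead small for large batches.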

Relevance: 60.00%

Abstract:

Workflow systems have traditionally focused on so-called production processes, which are characterized by pre-definition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forward new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of pockets of flexibility, which allows ad hoc changes and/or building of workflows for highly flexible processes. Essentially, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we discuss both the static and dynamic verification aspects. We also briefly present Chameleon, a prototype workflow engine that implements these concepts. (c) 2004 Elsevier Ltd. All rights reserved.
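One simple way to picture the static verification problem raised above (checking the constraint set itself, not an instance against it): if ordering constraints are treated as directed edges, a conflict shows up as a cycle and a redundancy as an edge already implied by the transitive closure of the remaining edges. The sketch below illustrates that idea only; it is not the pockets-of-flexibility formalism or the Chameleon engine, and all names are made up.

```python
# Illustrative check of a set of ordering constraints ("a before b") for
# conflicts (cycles) and redundancy (edges implied by the remaining edges).

def reachable(edges, src, dst):
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

def analyse(constraints):
    conflicts = [(a, b) for a, b in constraints if reachable(constraints, b, a)]
    redundant = [c for c in constraints
                 if reachable([e for e in constraints if e != c], c[0], c[1])]
    return conflicts, redundant


ok_set = [("collect", "review"), ("review", "approve"),
          ("collect", "approve")]            # last edge is redundant
bad_set = ok_set + [("approve", "collect")]  # adds a cycle -> conflict
print(analyse(ok_set))   # ([], [('collect', 'approve')])
print(analyse(bad_set))  # every edge on the cycle is reported as conflicting
```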

Relevance: 40.00%

Abstract:

The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for the visual speech recognition application also provide similar improvements for visual speaker recognition. A further study compares synchronous HMM (SHMM) based fusion of CAB visual features and traditional perceptual linear predictive (PLP) acoustic features with simpler utterance-level score fusion, showing that the higher complexity inherent in the SHMM approach does not appear to provide any improvement in the final audio-visual speaker verification system.
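For the comparison drawn at the end of the abstract, the simpler alternative to synchronous-HMM fusion is utterance-level score fusion: each modality produces its own verification score and the two are combined, for example by a weighted sum, before thresholding. A minimal sketch follows; the weights, scores and threshold are purely illustrative.

```python
# Illustrative utterance-level score fusion for audio-visual speaker
# verification: combine per-modality verification scores with a fixed
# weight and compare the fused score against a decision threshold.

def fuse_and_decide(audio_score, visual_score, weight=0.7, threshold=0.0):
    fused = weight * audio_score + (1.0 - weight) * visual_score
    return fused, fused > threshold     # True -> accept claimed identity


print(fuse_and_decide(audio_score=1.8, visual_score=-0.4))   # accept
print(fuse_and_decide(audio_score=-1.2, visual_score=0.3))   # reject
```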

Relevance: 40.00%

Abstract:

The verification possibilities of dynamically collimated treatment beams with a scanning liquid ionization chamber electronic portal imaging device (SLIC-EPID) are investigated. The ion concentration in the liquid of a SLIC-EPID, and therefore the read-out signal, is determined by two parameters of a differential equation describing the creation and recombination of the ions. Due to the form of this equation, the portal image detector constitutes a nonlinear dynamic system with memory. In this work, the parameters of the differential equation were experimentally determined for the particular chamber in use and for an incident open 6 MV photon beam. The mathematical description of the ion concentration was then used to predict portal images of intensity-modulated photon beams produced by a dynamic delivery technique, the sliding window approach. Due to the nature of the differential equation, a mathematical condition for 'reliable leaf motion verification' in the sliding window technique can be formulated. It is shown that the time constants for both formation and decay of the equilibrium concentration in the chamber are on the order of seconds. In order to guarantee reliable leaf motion verification, these time constants impose a constraint on the rapidity of the image read-out for a given maximum leaf speed. For a leaf speed of 2 cm s(-1), a minimum image acquisition frequency of about 2 Hz is required. Current SLIC-EPID systems are usually too slow, since they need about a second to acquire a portal image. However, if the condition is fulfilled, the memory property of the system can be used to reconstruct the leaf motion. It is shown that a simple edge detection algorithm can be employed to determine the leaf positions. The method is also very robust against image noise.
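The abstract does not reproduce the differential equation itself, but the description of ion creation and recombination governed by two parameters is consistent with the standard generation-recombination form written below. This is shown only as a plausible reconstruction under that assumption, not as the equation from the paper.

```latex
% Assumed generation-recombination form for the ion concentration n(t) in the
% liquid layer; q is the ion-pair production rate (proportional to dose rate)
% and \alpha the volume recombination coefficient. Not taken from the paper.
\begin{align}
  \frac{\mathrm{d}n}{\mathrm{d}t} &= q - \alpha\, n^{2}, \\
  n_{\mathrm{eq}} &= \sqrt{q/\alpha}
    && \text{(equilibrium concentration, } \mathrm{d}n/\mathrm{d}t = 0\text{)}, \\
  \tau &\sim \frac{1}{\sqrt{q\,\alpha}}
    && \text{(characteristic time to approach or leave equilibrium).}
\end{align}
```

With time constants on the order of seconds, as reported in the abstract, a leaf travelling at 2 cm/s moves several centimetres within one time constant, which is why reliable leaf-motion verification requires an acquisition rate of roughly 2 Hz rather than the roughly 1 Hz of the then-current SLIC-EPID systems.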

Relevance: 30.00%

Abstract:

This paper is a continuation of the paper titled “Concurrent multi-scale modeling of civil infrastructure for analyses on structural deteriorating—Part I: Modeling methodology and strategy”, with the emphasis on model updating and verification for the developed concurrent multi-scale model. The sensitivity-based parameter updating method was applied, and important issues such as the selection of reference data and model parameters, as well as the model updating procedure for the multi-scale model, were investigated based on sensitivity analysis of the selected model parameters. The experimental modal data, as well as static responses in terms of component nominal stresses and hot-spot stresses at the locations of concern, were used for dynamic response-oriented and static response-oriented model updating, respectively. The updated multi-scale model was further verified so that it can act as the baseline model, i.e. the finite-element model assumed to be closest to the real state of the structure and available for subsequent numerical simulation. The comparison of dynamic and static responses between the results calculated with the final model and the measured data indicated that the updating and verification methods applied in this paper are reliable and accurate for multi-scale models of frame-like structures. General procedures for multi-scale model updating and verification were finally proposed for nonlinear physics-based modeling of large civil infrastructure, and they were applied to the model verification of a long-span bridge as a practical demonstration of the proposed procedures.
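Sensitivity-based updating of the kind described here is, at its core, an iterative linearised least-squares fit of selected model parameters to the measured modal and static responses. The sketch below shows only that core step, with a made-up sensitivity matrix and toy data; it is not the paper's multi-scale implementation.

```python
# Illustrative sensitivity-based parameter updating step: given the sensitivity
# matrix S = d(response)/d(parameter) at the current parameters, solve a linear
# least-squares problem for the parameter correction and iterate.

import numpy as np

def update_parameters(theta, predict, measured, sensitivity, n_iter=5):
    for _ in range(n_iter):
        residual = measured - predict(theta)     # response mismatch
        S = sensitivity(theta)                   # shape: (n_responses, n_params)
        delta, *_ = np.linalg.lstsq(S, residual, rcond=None)
        theta = theta + delta
    return theta


# Toy example: two "frequencies" depending linearly on two stiffness factors.
A = np.array([[2.0, 0.5],
              [0.3, 1.5]])
measured = A @ np.array([1.10, 0.95])            # pretend measurements
theta0 = np.array([1.0, 1.0])                    # initial FE parameters
theta = update_parameters(theta0,
                          predict=lambda t: A @ t,
                          measured=measured,
                          sensitivity=lambda t: A)
print(theta)                                     # ~[1.10, 0.95]
```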

Relevance: 30.00%

Abstract:

The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for the speech recognition application also provide similar improvements for speaker recognition. These results suggest that visual speaker recognition can improve considerably when conducted solely on the basis of the dynamic speech information rather than the static appearance of the speaker's mouth region.

Relevance: 30.00%

Abstract:

A method for concurrent multi-scale modeling of structural behavior (CMSM-of-SB) for the purpose of structural health monitoring, including model updating and validation, has been studied. The detailed process of model updating and validation is discussed in terms of a reduced-scale specimen of the steel box girder in the longitudinal stiffening truss of a long-span bridge. First, some influence factors affecting the accuracy of the CMSM-of-SB, including the boundary restraint rigidity and the geometry and material parameters at the weld toe and its neighborhood, are analyzed using a sensitivity method. Then, sensitivity-based model updating technology is adopted to update the developed CMSM-of-SB, and model verification is carried out by calculating and comparing stresses at different locations under various loads, in terms of both dynamic characteristics and static responses. It can be concluded that the CMSM-of-SB based on the substructure method is valid.

Relevance: 30.00%

Abstract:

Background and purpose: The purpose of the work presented in this paper was to determine whether patient positioning and delivery errors could be detected using electronic portal images of intensity-modulated radiotherapy (IMRT). Patients and methods: We carried out a series of controlled experiments delivering an IMRT beam to a humanoid phantom using both the dynamic and the multiple static field methods of delivery. The beams were imaged, the images calibrated to remove the IMRT fluence variation, and then compared with calibrated images of the reference beams without any delivery or position errors. The first set of experiments involved translating the position of the phantom both laterally and in a superior/inferior direction by distances of 1, 2, 5 and 10 mm. The phantom was also rotated by 1° and 2°. For the second set of measurements the phantom position was kept fixed and delivery errors were introduced into the beam. The delivery errors took the form of leaf position and segment intensity errors. Results: The method was able to detect shifts in the phantom position of 1 mm, leaf position errors of 2 mm, and dosimetry errors of 10% on a single segment of a 15-segment step-and-shoot IMRT delivery (significantly less than 1% of the total dose). Conclusions: The results of this work have shown that the method of imaging the IMRT beam and calibrating the images to remove the intensity modulations could be a useful tool in verifying both the patient position and the delivery of the beam.

Relevance: 30.00%

Abstract:

Purpose: The precise shape of the three-dimensional dose distributions created by intensity-modulated radiotherapy means that the verification of patient position and setup is crucial to the outcome of the treatment. In this paper, we investigate and compare the use of two different image calibration procedures that allow extraction of patient anatomy from measured electronic portal images of intensity-modulated treatment beams. Methods and Materials: Electronic portal images of the intensity-modulated treatment beam delivered using the dynamic multileaf collimator technique were acquired. The images were formed by measuring a series of frames or segments throughout the delivery of the beams. The frames were then summed to produce an integrated portal image of the delivered beam. Two different methods for calibrating the integrated image were investigated with the aim of removing the intensity modulations of the beam. The first involved a simple point-by-point division of the integrated image by a single calibration image of the intensity-modulated beam delivered to a homogeneous polymethyl methacrylate (PMMA) phantom. The second calibration method is known as the quadratic calibration method and required a series of calibration images of the intensity-modulated beam delivered to different thicknesses of homogeneous PMMA blocks. Measurements were made using two different detector systems: a Varian amorphous silicon flat-panel imager and a Theraview camera-based system. The methods were tested first using a contrast phantom before images were acquired of intensity-modulated radiotherapy treatment delivered to the prostate and pelvic nodes of cancer patients at the Royal Marsden Hospital. Results: The results indicate that the calibration methods can be used to remove the intensity modulations of the beam, making it possible to see the outlines of bony anatomy that could be used for patient position verification. This was shown for both posterior and lateral delivered fields. Conclusions: Very little difference between the two calibration methods was observed, so the simpler division method, requiring only the single extra calibration measurement and much simpler computation, was the favored method. This new method could provide a complementary tool to existing position verification methods, and it has the advantage that it is completely passive, requiring no further dose to the patient and using only the treatment fields.
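The simpler of the two calibration procedures, the point-by-point division, can be pictured as dividing the patient image by an image of the same intensity-modulated beam delivered to a homogeneous phantom, so that the fluence modulation cancels and only the attenuation differences due to the patient's anatomy remain. A minimal sketch under that assumption follows; the array names and toy data are illustrative, not the clinical implementation.

```python
# Illustrative point-by-point division calibration: divide the integrated
# portal image of the IMRT beam through the patient by the same beam imaged
# through a homogeneous PMMA phantom, so the intensity modulation cancels.

import numpy as np

def division_calibration(patient_image, pmma_image, eps=1e-6):
    ratio = patient_image / np.maximum(pmma_image, eps)   # modulation cancels
    return ratio / ratio.mean()                           # normalise for display


rng = np.random.default_rng(0)
modulation = rng.uniform(0.2, 1.0, size=(8, 8))   # pretend IMRT fluence pattern
anatomy = np.ones((8, 8))
anatomy[2:5, 3:6] = 0.85                           # "bone" attenuates more
patient_image = modulation * anatomy
pmma_image = modulation                            # homogeneous phantom image
print(np.round(division_calibration(patient_image, pmma_image), 2))
```

The quadratic method replaces the single flood image with a fit over several PMMA thicknesses, which is why it needs more calibration data and more computation for, as the abstract notes, little practical gain over the simple division.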

Relevance: 30.00%

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to an MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems: ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
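Of the comparison metrics listed above (dose difference, gamma evaluation, normalised dose difference), the gamma index is the least self-explanatory. The sketch below gives a compact global 2D gamma evaluation purely as an illustration of the idea, not the project's actual implementation; the 3%/3 mm criteria and the toy dose grids are example values only.

```python
# Illustrative 2D gamma evaluation between a reference (TPS) and an evaluated
# (Monte Carlo) dose grid. For every reference point the minimum combined
# dose-difference / distance-to-agreement metric over the evaluated grid is
# computed; gamma <= 1 means the point passes the chosen criteria.

import numpy as np

def gamma_2d(ref, evl, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
    ny, nx = ref.shape
    ys, xs = np.meshgrid(np.arange(ny) * spacing_mm,
                         np.arange(nx) * spacing_mm, indexing="ij")
    norm_dose = dose_crit * ref.max()          # global dose normalisation
    gamma = np.empty_like(ref)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = (ys - ys[iy, ix]) ** 2 + (xs - xs[iy, ix]) ** 2
            ddose2 = (evl - ref[iy, ix]) ** 2
            gamma[iy, ix] = np.sqrt(
                (dist2 / dist_crit_mm ** 2 + ddose2 / norm_dose ** 2).min())
    return gamma


ref = np.outer(np.hanning(20), np.hanning(20)) * 2.0   # toy "TPS" dose (Gy)
evl = ref * 1.02                                        # toy "MC" dose, 2% hotter
g = gamma_2d(ref, evl, spacing_mm=2.5)
print(f"pass rate: {np.mean(g <= 1.0):.1%}")
```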