961 results for conformance checking
Abstract:
The objective of this research is to investigate the consequences of sharing or using information generated in one phase of a project in subsequent life cycle phases. Sometimes the assumptions supporting the information change, and at other times the context within which the information was created changes in a way that renders the information invalid. Often these inconsistencies are not discovered until the damage has occurred. This study builds on previous research that proposed a framework based on the metaphor of 'ecosystems' to model such inconsistencies in the 'supply chain' of life cycle information (Brokaw and Mukherjee, 2012). Such inconsistencies often result in litigation. Therefore, this paper studies a set of legal cases that resulted from inconsistencies in life cycle information, within the ecosystems framework. For each project, the errant information type, the creator and user of the information and their relationship, and the time of creation and usage of the information in the life cycle of the project are investigated to assess the causes of failures in precise and accurate information flow, as well as the impact of such failures in later stages of the project. The analysis shows that misleading information is mostly due to a lack of collaboration. Moreover, in all of the studied cases, a lack of compliance checking, imprecise data, and insufficient clarification hinder the accurate and smooth flow of information. The paper presents findings regarding the bottlenecks in the information flow process during the design, construction, and post-construction phases. It also highlights the role of collaboration, information integration, and management during the project life cycle, and presents a baseline for improving the information supply chain through the life cycle of the project.
Abstract:
Lesion detection aids ideally aim at increasing the sensitivity of visual caries detection without trading off too much in terms of specificity. The use of a dental probe (explorer), bitewing radiography and fibre-optic transillumination (FOTI) have long been recommended for this purpose. Today, probing of suspected lesions in the sense of checking the 'stickiness' is regarded as obsolete, since it achieves no gain of sensitivity and might cause irreversible tooth damage. Bitewing radiography helps to detect lesions that are otherwise hidden from visual examination, and it should therefore be applied to a new patient. The diagnostic performance of radiography at approximal and occlusal sites is different, as this relates to the 3-dimensional anatomy of the tooth at these sites. However, treatment decisions have to take more into account than just lesion extension. Bitewing radiography provides additional information for the decision-making process that mainly relies on the visual and clinical findings. FOTI is a quick and inexpensive method which can enhance visual examination of all tooth surfaces. Both radiography and FOTI can improve the sensitivity of caries detection, but require sufficient training and experience to interpret information correctly. Radiography also carries the burden of the risks and legislation associated with using ionizing radiation in a health setting and should be repeated at intervals guided by the individual patient's caries risk. Lesion detection aids can assist in the longitudinal monitoring of the behaviour of initial lesions.
Abstract:
OBJECTIVE: The aim of this study was to establish and validate a three-dimensional imaging protocol for the assessment of Computed Tomography (CT) scans of abdominal aortic aneurysms in UK EVAR trials patients. Quality control and repeatability of anatomical measurements are important for the validity of any core laboratory. METHODS: Three different observers performed anatomical measurements on 50 preoperative CT scans of aortic aneurysms using the Vitrea 2 three-dimensional post-imaging software in a core laboratory setting. We assessed the accuracy of intra- and inter-observer repeatability of measurements, the time required for collection of measurements, 3 different levels of automation, and 3 different automated criteria for measurement of neck length. RESULTS: None of the automated neck length measurements demonstrated sufficient accuracy, and it was necessary to check the important automated landmarks. Good intra-observer and limited inter-observer agreement were achieved with three-dimensional assessment. Complete assessment of the aneurysm and iliacs took an average (SD) of 17.2 (4.1) minutes. CONCLUSIONS: Aortic aneurysm anatomy can be assessed reliably and quickly using three-dimensional assessment, but for scans of limited quality, manual checking of important landmarks remains necessary. Using a set protocol, agreement between observers is satisfactory but not as good as within observers.
Abstract:
BACKGROUND: Enquiries among patients on the one hand and experimental and observational studies on the other suggest an influence of stress on inflammatory bowel diseases (IBD). However, since this influence remains hypothetical, further research is essential. We aimed to devise recommendations for future investigations in IBD by means of scrutinizing previously applied methodology. METHODS: We critically reviewed prospective clinical studies on the effect of psychological stress on IBD. Eligible studies were searched by means of the PubMed electronic library and through checking the bibliographies of located sources. RESULTS: We identified 20 publications resulting from 18 different studies. Sample sizes ranged between 10 and 155 participants. Study designs in terms of patient assessment, control variables, and applied psychometric instruments varied substantially across studies. Methodological strengths and weaknesses were irregularly dispersed. Thirteen studies reported significant relationships between stress and adverse outcomes. CONCLUSIONS: Study designs, including accuracy of outcome assessment and repeated sampling of outcomes (i.e. symptoms, clinical, and endoscopic), depended upon conditions like sample size, participants' compliance, and available resources. Meeting additional criteria of sound methodology, like taking into account covariates of the disease and its course, is strongly recommended to possibly improve study designs in future IBD research.
Abstract:
Most languages fall into one of two camps: either they adopt a unique, static type system, or they abandon static type-checks for run-time checks. Pluggable types blur this division by (i) making static type systems optional, and (ii) supporting a choice of type systems for reasoning about different kinds of static properties. Dynamic languages can then benefit from static-checking without sacrificing dynamic features or committing to a unique, static type system. But the overhead of adopting pluggable types can be very high, especially if all existing code must be decorated with type annotations before any type-checking can be performed. We propose a practical and pragmatic approach to introduce pluggable type systems to dynamic languages. First of all, only annotated code is type-checked. Second, limited type inference is performed on unannotated code to reduce the number of reported errors. Finally, external annotations can be used to type third-party code. We present Typeplug, a Smalltalk implementation of our framework, and report on experience applying the framework to three different pluggable type systems.
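The "only annotated code is type-checked" policy can be mimicked in any dynamic language. The Python sketch below checks a call only against the parameters that carry type annotations and leaves unannotated code untouched; the decorator and the example function are illustrative assumptions, not part of Typeplug.

```python
import inspect

def check_annotated(func):
    """Check only parameters that carry type annotations, leaving
    unannotated ones unchecked -- an analogue of the 'only annotated
    code is type-checked' policy of pluggable type systems."""
    sig = inspect.signature(func)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            # Unannotated parameters are skipped entirely.
            if ann is not inspect.Parameter.empty and not isinstance(value, ann):
                raise TypeError(
                    f"{name} expected {ann.__name__}, got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@check_annotated
def scale(x: int, factor):   # 'factor' is unannotated and never checked
    return x * factor

scale(3, 2.5)   # ok: x is an int; factor is not inspected
```

This mirrors the trade-off in the abstract: annotated code gains checking, while legacy unannotated code keeps its dynamic behaviour.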
Abstract:
The threat of product piracy is growing constantly, and the German machinery and plant engineering sector in particular is increasingly affected. To protect components and spare parts, a technical concept for defending against product piracy was developed. In this system, parts are marked with copy-proof authenticity features, which are read and verified at various identification and inspection points along the supply chain and, in particular, when used in the machine. The inspection results are stored in a central database in order to enable new services and to facilitate communication between manufacturer and customer.
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications such as military and medical domains [JBL+00, NFB07]. In many MR systems and more precisely in augmented scenes, one needs the application to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and validate formally the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements as well as their typical interactions give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on a MR platform. The approach is illustrated first on a small example. A realistic case study is also developed. It is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
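In its simplest form, the end-to-end latency requirement described here amounts to checking that the composed worst-case delays of the pipeline stages stay within a real-time budget. The Python sketch below is a toy analogue of such a timing check; the stage names, delays, and budget are illustrative assumptions, and the paper's actual verification uses timed automata checked in UPPAAL.

```python
# Toy analogue of an end-to-end latency check on a sequential MR
# pipeline: each stage (sensing, tracking, rendering) is modelled by
# a worst-case delay in milliseconds, and the composed latency is
# compared against a real-time budget.

def end_to_end_latency(stages):
    """Worst-case latency of a sequential composition of stages."""
    return sum(delay for _, delay in stages)

def meets_deadline(stages, budget_ms):
    """True when the composed worst case fits within the budget."""
    return end_to_end_latency(stages) <= budget_ms

pipeline = [("sensor", 5), ("tracking", 12), ("rendering", 16)]
print(meets_deadline(pipeline, 33))   # worst case is 33 ms
```

A timed-automata model additionally captures synchronisation and interleaving between components, which a simple sum of delays cannot express; that is what the MIRELA-generated UPPAAL automata verify.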
Abstract:
BACKGROUND: This study aimed to investigate the influence of deep sternal wound infection on long-term survival following cardiac surgery. MATERIAL AND METHODS: In our institutional database we retrospectively evaluated medical records of 4732 adult patients who received open-heart surgery from January 1995 through December 2005. The predictive factors for DSWI were determined using logistic regression analysis. Then, each patient with deep sternal wound infection (DSWI) was matched with 2 controls without DSWI, according to the risk factors identified previously. After checking balance resulting from matching, short-term mortality was compared between groups using a paired test, and long-term survival was compared using Kaplan-Meier analysis and a Cox proportional hazard model. RESULTS: Overall, 4732 records were analyzed. The mean age of the investigated population was 69.3±12.8 years. DSWI occurred in 74 (1.56%) patients. Significant independent predictive factors for deep sternal infections were active smoking (OR 2.19, CI95 1.35-3.53, p=0.001), obesity (OR 1.96, CI95 1.20-3.21, p=0.007), and insulin-dependent diabetes mellitus (OR 2.09, CI95 1.05-10.06, p=0.016). Mean follow-up in the matched set was 125 months, IQR 99-162. After matching, in-hospital mortality was higher in the DSWI group (8.1% vs. 2.7% p=0.03), but DSWI was not an independent predictor of long-term survival (adjusted HR 1.5, CI95 0.7-3.2, p=0.33). CONCLUSIONS: The results presented in this report clearly show that post-sternotomy deep wound infection does not influence long-term survival in an adult general cardio-surgical patient population.
Abstract:
Checking the admissibility of quasiequations in a finitely generated (i.e., generated by a finite set of finite algebras) quasivariety Q amounts to checking validity in a suitable finite free algebra of the quasivariety, and is therefore decidable. However, since free algebras may be large even for small sets of small algebras and very few generators, this naive method for checking admissibility in Q is not computationally feasible. In this paper, algorithms are introduced that generate a minimal (with respect to a multiset well-ordering on their cardinalities) finite set of algebras such that the validity of a quasiequation in this set corresponds to admissibility of the quasiequation in Q. In particular, structural completeness (validity and admissibility coincide) and almost structural completeness (validity and admissibility coincide for quasiequations with unifiable premises) can be checked. The algorithms are illustrated with a selection of well-known finitely generated quasivarieties, and adapted to handle also admissibility of rules in finite-valued logics.
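The basic building block of such algorithms — checking the validity of a quasiequation in a small finite algebra by enumerating all assignments — can be sketched as follows. The two-element lattice and the cancellation quasiequation are illustrative choices for the sketch, not the paper's algorithm.

```python
from itertools import product

# Brute-force validity check of a quasiequation in a finite algebra.
# The algebra is the two-element lattice ({0,1}, meet, join); the
# quasiequation is the cancellation law
#   x AND y = x AND z  and  x OR y = x OR z  =>  y = z,
# which is valid in every distributive lattice.

UNIVERSE = (0, 1)
meet = min
join = max

def valid(premises, conclusion, variables=("x", "y", "z")):
    """A quasiequation holds iff every assignment satisfying all
    premises also satisfies the conclusion."""
    for values in product(UNIVERSE, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(lhs(env) == rhs(env) for lhs, rhs in premises):
            lhs, rhs = conclusion
            if lhs(env) != rhs(env):
                return False
    return True

premises = [
    (lambda e: meet(e["x"], e["y"]), lambda e: meet(e["x"], e["z"])),
    (lambda e: join(e["x"], e["y"]), lambda e: join(e["x"], e["z"])),
]
conclusion = (lambda e: e["y"], lambda e: e["z"])
print(valid(premises, conclusion))   # True in the two-element lattice
```

The feasibility problem the paper addresses is that admissibility requires this check in a free algebra, which can be enormous; the proposed algorithms replace it with a minimal set of small algebras.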
Abstract:
Historical, i.e. pre-1957, upper-air data are a valuable source of information on the state of the atmosphere, in some parts of the world dating back to the early 20th century. However, to date, reanalyses have only partially made use of these data, and only of observations made after 1948. Even for the period between 1948 (the starting year of the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis) and the International Geophysical Year in 1957 (the starting year of the ERA-40 reanalysis), when the global upper-air coverage reached more or less its current status, many observations have not yet been digitised. The Comprehensive Historical Upper-Air Network (CHUAN) already compiled a large collection of pre-1957 upper-air data. In the framework of the European project ERA-CLIM (European Reanalysis of Global Climate Observations), significant amounts of additional upper-air data have been catalogued (> 1.3 million station days), imaged (> 200 000 images) and digitised (> 700 000 station days) in order to prepare a new input data set for upcoming reanalyses. The records cover large parts of the globe, focussing on, so far, less well covered regions such as the tropics, the polar regions and the oceans, and on very early upper-air data from Europe and the US. The total number of digitised/inventoried records is 61/101 for moving upper-air data, i.e. data from ships, etc., and 735/1783 for fixed upper-air stations. Here, we give a detailed description of the resulting data set including the metadata and the quality checking procedures applied. The data will be included in the next version of CHUAN. The data are available at doi:10.1594/PANGAEA.821222
Abstract:
Strategies of cognitive control are helpful in reducing anxiety experienced during anticipation of unpleasant or potentially unpleasant events. We investigated the associated cerebral information processing underlying the use of a specific cognitive control strategy during the anticipation of affect-laden events. Using functional magnetic resonance imaging, we examined differential brain activity during anticipation of events of unknown and negative emotional valence in a group of eighteen healthy subjects who used a cognitive control strategy, similar to "reality checking" as used in psychotherapy, compared with a group of sixteen subjects who did not exert cognitive control. While expecting unpleasant stimuli, the "cognitive control" group showed higher activity in left medial and dorsolateral prefrontal cortex areas but reduced activity in the left extended amygdala, pulvinar/lateral geniculate nucleus, and fusiform gyrus. Cognitive control during the "unknown" expectation was likewise associated with reduced amygdalar activity, and further with reduced insular and thalamic activity. The amygdala activations associated with cognitive control correlated negatively with the reappraisal scores of an emotion regulation questionnaire. The results indicate that cognitive control of particularly unpleasant emotions is associated with elevated prefrontal cortex activity that may serve to attenuate emotion processing in, for instance, the amygdala and, notably, in perception-related brain areas.
Abstract:
We propose notions of calibration for probabilistic forecasts of general multivariate quantities. Probabilistic copula calibration is a natural analogue of probabilistic calibration in the univariate setting. It can be assessed empirically by checking for the uniformity of the copula probability integral transform (CopPIT), which is invariant under coordinate permutations and coordinatewise strictly monotone transformations of the predictive distribution and the outcome. The CopPIT histogram can be interpreted as a generalization and variant of the multivariate rank histogram, which has been used to check the calibration of ensemble forecasts. Climatological copula calibration is an analogue of marginal calibration in the univariate setting. Methods and tools are illustrated in a simulation study and applied to compare raw numerical model and statistically postprocessed ensemble forecasts of bivariate wind vectors.
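CopPIT generalises the univariate probability integral transform (PIT) to multivariate forecasts. The Python sketch below illustrates the univariate building block: if the forecast distribution matches the data-generating distribution, the PIT values are uniform on [0, 1] and the histogram bars come out roughly flat. The normal forecast and the sample size are illustrative assumptions.

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal forecast distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pit_histogram(outcomes, cdf, bins=10):
    """Histogram of PIT values; roughly equal counts indicate a
    probabilistically calibrated forecast."""
    counts = [0] * bins
    for y in outcomes:
        p = min(cdf(y), 1.0 - 1e-12)   # clamp so p = 1.0 lands in the last bin
        counts[int(p * bins)] += 1
    return counts

# Outcomes drawn from the same distribution as the forecast: calibrated.
random.seed(0)
outcomes = [random.gauss(0.0, 1.0) for _ in range(10_000)]
counts = pit_histogram(outcomes, normal_cdf)
# With a calibrated forecast, each of the 10 bins holds about 1000 draws.
```

A U-shaped histogram would instead signal an underdispersed forecast, and a hump-shaped one an overdispersed forecast; the CopPIT histogram supports the analogous diagnosis for multivariate forecasts.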
Abstract:
Code clone detection helps connect developers across projects, if we do it on a large scale. The cornerstones that allow clone detection to work on a large scale are: (1) bad hashing, (2) lightweight parsing using regular expressions, and (3) MapReduce pipelines. Bad hashing means to determine whether or not two artifacts are similar by checking whether their hashes are identical. We show a bad hashing scheme that works well on source code. Lightweight parsing using regular expressions is our technique of obtaining entire parse trees from regular expressions, robustly and efficiently. We detail the algorithm and implementation of one such regular expression engine. MapReduce pipelines are a way of expressing a computation such that it can automatically and simply be parallelized. We detail the design and implementation of one such MapReduce pipeline that is efficient and debuggable. We show a clone detector that combines these cornerstones to detect code clones across all projects, across all versions of each project.
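The "bad hashing" idea — declaring two artifacts clones when the hashes of their normalised text coincide — can be sketched as follows. The normalisation rules (stripping C-style comments and whitespace) and the hash choice are illustrative assumptions, not the paper's exact scheme.

```python
import hashlib
import re

def normalise(source: str) -> str:
    """Reduce a source fragment to a layout-insensitive form."""
    source = re.sub(r"//[^\n]*|/\*.*?\*/", "", source, flags=re.S)  # drop comments
    return re.sub(r"\s+", "", source)                               # drop whitespace

def clone_hash(source: str) -> str:
    """Two fragments with equal hashes are treated as clones."""
    return hashlib.sha256(normalise(source).encode()).hexdigest()

a = "int add(int a, int b) { return a + b; }"
b = "int add(int a,int b){\n  // sum\n  return a+b;\n}"
print(clone_hash(a) == clone_hash(b))   # True: same code modulo layout
```

Because comparison reduces to hash equality, candidate clones across millions of files can be grouped with a single sort or shuffle step, which is what makes the approach fit naturally into a MapReduce pipeline.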
Abstract:
BACKGROUND: Central and peripheral vision are needed for object detection. Previous research has shown that visual target detection is affected by age. In addition, light conditions also influence visual exploration. The aim of the study was to investigate the effects of age and different light conditions on visual exploration behavior and on driving performance during simulated driving. METHODS: A fixed-base simulator with a 180-degree field of view was used to simulate a motorway route under daylight and night conditions to test 29 young subjects (25-40 years) and 27 older subjects (65-78 years). Drivers' eye fixations were analyzed and assigned to regions of interest (ROIs) such as the street, road signs, the car ahead, the environment, the rear-view mirror, the left and right side mirrors, incoming cars, parked cars, and road repairs. In addition, lane-keeping and driving speed were analyzed as measures of driving performance. RESULTS: Older drivers had longer fixations on the task-relevant ROIs, but checked the mirrors less frequently than younger drivers. In both age groups, night driving led to fewer fixations on the mirrors. At the performance level, older drivers showed more variation in driving speed and lane-keeping behavior, which was especially prominent at night. In younger drivers, night driving had no impact on driving speed or lane-keeping behavior. CONCLUSIONS: Older drivers' visual exploration behavior is more focused on the task-relevant ROIs, especially at night, when their driving performance becomes more heterogeneous than that of younger drivers.
Abstract:
The first operations at the new High-altitude Maïdo Observatory at La Réunion began in 2013. The Maïdo Lidar Calibration Campaign (MALICCA), organized there in April 2013, focused on the validation of the thermodynamic parameters (temperature, water vapor, and wind) measured with many instruments, including the new very large lidar for water vapor and temperature profiles. This publication provides an overview of the different instruments deployed during the campaign and their status, some of the targeted scientific questions, and the associated instrumental issues. Detailed studies of some individual techniques are addressed elsewhere. This study shows that temperature profiles were obtained from the ground to the mesopause (80 km) using the lidar and regular meteorological balloon-borne sondes, with good agreement in the overlap range. Water vapor is also monitored from the ground to the mesopause using the Raman lidar and microwave techniques. Both techniques need to be pushed to their limits to reduce the missing range in the lower stratosphere. Total columns obtained from global positioning system receivers or spectrometers are valuable for checking the calibration and ensuring vertical continuity. The lidar can also provide the vertical cloud structure, a valuable complementary piece of information when investigating the water vapor cycle. Finally, wind vertical profiles, previously obtained from sondes, are now also retrieved at Maïdo from the newly implemented microwave technique and the lidar. Stable calibrations, as well as resolution of small-scale dynamical structure, are required to monitor the thermodynamic state of the middle atmosphere, ensure validation of satellite sensors, study the transport of water vapor in the vicinity of the tropical tropopause and its link with cirrus clouds and cyclones, and study the impact of small-scale dynamics (gravity waves) and its link with the mean state of the mesosphere.