806 results for Difference logic
Abstract:
When reengineering legacy systems, it is crucial to assess whether the legacy behavior has been preserved or how it changed due to the reengineering effort. Ideally, if a legacy system is covered by tests, running those tests on the new version can identify potential differences or discrepancies. However, writing tests for an unknown and large system is difficult due to the lack of internal knowledge. It is especially difficult to bring the system to an appropriate state. Our solution is based on the acknowledgment that one of the few trustworthy pieces of information available when approaching a legacy system is the running system itself. Our approach reifies the execution traces and uses logic programming to express tests on them. Thereby it eliminates the need to programmatically bring the system into a particular state, and hands the test writer a high-level abstraction mechanism to query the trace. The resulting system, called TESTLOG, was used on several real-world case studies to validate our claims.
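The abstract does not show TESTLOG's actual query language, which is logic-programming based; as a rough Python sketch of the underlying idea only (testing by querying a reified execution trace instead of programmatically constructing program state), with a hypothetical event schema and invented class and method names:

```python
from dataclasses import dataclass

@dataclass
class CallEvent:
    """One reified event from a recorded execution trace (hypothetical schema)."""
    receiver_class: str
    method: str
    args: tuple
    result: object

def calls(trace, receiver_class=None, method=None):
    """Declarative query over the trace: yield events matching the given pattern."""
    for e in trace:
        if receiver_class is not None and e.receiver_class != receiver_class:
            continue
        if method is not None and e.method != method:
            continue
        yield e

# A "test" expressed as a query over a recorded trace, with no set-up code:
# every recorded withdrawal left a non-negative balance.
trace = [
    CallEvent("Account", "withdraw", (50,), 120),
    CallEvent("Account", "withdraw", (70,), 50),
]
assert all(e.result >= 0 for e in calls(trace, "Account", "withdraw"))
```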
Abstract:
Statistical approaches to evaluate higher-order SNP-SNP and SNP-environment interactions are critical in genetic association studies, as susceptibility to complex disease is likely to be related to the interaction of multiple SNPs and environmental factors. Logic regression (Kooperberg et al., 2001; Ruczinski et al., 2003) is one such approach, where interactions between SNPs and environmental variables are assessed in a regression framework, and interactions become part of the model search space. In this manuscript we extend the logic regression methodology, originally developed for cohort and case-control studies, to studies of trios with affected probands. Trio logic regression accounts for the linkage disequilibrium (LD) structure in the genotype data and accommodates missing genotypes via haplotype-based imputation. We also derive an efficient algorithm to simulate case-parent trios in which genetic risk is determined via epistatic interactions.
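For readers unfamiliar with it, the general form of a logic regression model (following Ruczinski et al., 2003; not restated in the abstract) can be sketched as:

```latex
% g is a link function, Y the response, X_i binary covariates (SNP or
% environment indicators), and each L_j a Boolean combination of the X_i
% selected by the model search.
g\!\left(\mathbb{E}[Y]\right) = \beta_0 + \sum_{j=1}^{t} \beta_j L_j,
\qquad \text{e.g.}\quad L_1 = (X_1 \wedge X_2) \vee X_3^{c}.
```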
Abstract:
Gas is trapped in polar ice sheets at ~50–120 m below the surface and is therefore younger than the surrounding ice. Firn densification models are used to evaluate this ice age-gas age difference (Δage) in the past. However, such models need to be validated by data, in particular for periods colder than the present day on the East Antarctic plateau. Here we bring new constraints to test a firn densification model applied to the EPICA Dome C (EDC) site for the last 50 kyr, by linking the EDC ice core to the EPICA Dronning Maud Land (EDML) ice core, both in the ice phase (using volcanic horizons) and in the gas phase (using rapid methane variations). We also use the structured 10Be peak, occurring 41 kyr before present (BP) and due to the low geomagnetic field associated with the Laschamp event, to experimentally estimate the Δage during this event. Our results reveal an apparent overestimate of the Δage by the firn densification model during the last glacial period at EDC. Tests with different accumulation rate and temperature scenarios do not entirely resolve this discrepancy. Although the exact reasons for the Δage overestimate at the two EPICA sites remain unknown at this stage, we conclude that current densification model simulations have deficits under glacial climatic conditions. Whatever the cause of the Δage overestimate, our finding suggests that the phase relationship between CO2 and EDC temperature previously inferred for the start of the last deglaciation (a lag of CO2 by 800±600 yr) was overestimated.
Abstract:
Comments on an article by Kashima et al. (see record 2007-10111-001). In their target article, Kashima and colleagues try to show how a connectionist-model conceptualization of the self is best suited to capture the self's temporal and socio-culturally contextualized nature. They propose a new model and, to support it, conduct computer simulations of psychological phenomena whose importance for the self has long been clear, even if not formally modeled, such as imitation and the learning of sequence and narrative. As explicated when we advocated connectionist models as a metaphor for the self in Mischel and Morf (2003), we fully endorse the utility of such a metaphor, as these models have some of the processing characteristics necessary for capturing key aspects and functions of a dynamic cognitive-affective self-system. As elaborated in that chapter, we see as their principal strength that connectionist models can take account of multiple simultaneous processes without invoking a single central control. All outputs reflect a distributed pattern of activation across a large number of simple processing units, the nature of which depends on (and changes with) the connection weights between the links and the satisfaction of mutual constraints across these links (Rumelhart & McClelland, 1986). This allows a simple account of why certain input features will at times predominate, while others take over on other occasions.
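As a toy illustration of the kind of processing the commentary describes (a distributed activation pattern updated through symmetric connection weights, with no central controller), here is a minimal sketch; the network size, weights, and inputs are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                          # number of simple processing units
W = 0.2 * rng.normal(size=(n, n))
W = (W + W.T) / 2                              # symmetric weights encode mutual constraints
np.fill_diagonal(W, 0.0)
external_input = rng.normal(size=n)            # e.g., current situational cues
a = np.zeros(n)                                # unit activations

for _ in range(50):                            # repeatedly update activations via the weights
    a = np.tanh(W @ a + external_input)

print(np.round(a, 2))                          # the resulting distributed pattern is the "output"
```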
Abstract:
The use of a conventional orifice-plate meter is typically restricted to measurements of steady flows. This study proposes a new and effective computational-experimental approach for measuring the time-varying (but steady-in-the-mean) nature of turbulent pulsatile gas flows. Low Mach number (effectively constant density) steady-in-the-mean gas flows with large-amplitude fluctuations (whose highest significant frequency is characterized by the value fF) are termed pulsatile if the fluctuations have a direct correlation with the time-varying signature of the imposed dynamic pressure difference and, furthermore, have fluctuation amplitudes that are significantly larger than those associated with turbulence or random acoustic wave signatures. The experimental aspect of the proposed calibration approach is based on the use of Coriolis meters (whose oscillating-arm frequency fcoriolis >> fF), which are capable of effectively measuring the mean flow rate of the pulsatile flows. Together with the experimental measurements of the mean mass flow rate of these pulsatile flows, the computational approach presented here is shown to be effective in converting the dynamic pressure difference signal into the desired dynamic flow rate signal. The proposed approach is reliable because the time-varying flow rate predictions obtained for two different orifice-plate meters exhibit approximately the same qualitative, dominant features of the pulsatile flow.
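The abstract does not give the conversion procedure itself; as a simplified stand-in (not the authors' method), the sketch below assumes a quasi-steady orifice relation, mdot(t) = C*sqrt(dp(t)), and calibrates the single constant C so that the time-averaged converted flow matches the Coriolis-meter mean flow rate:

```python
import numpy as np

def calibrate_and_convert(dp, mdot_mean_coriolis):
    """Convert a dynamic pressure-difference signal dp(t) [Pa] into a dynamic
    mass-flow-rate signal [kg/s], assuming mdot = C * sqrt(dp) and choosing C
    so the time-averaged result equals the Coriolis-meter mean reading."""
    dp = np.clip(np.asarray(dp, dtype=float), 0.0, None)    # discard negative samples
    sqrt_dp = np.sqrt(dp)
    C = mdot_mean_coriolis / sqrt_dp.mean()                  # single calibration constant
    return C * sqrt_dp

# Synthetic steady-in-the-mean pulsatile signal (values are illustrative only)
t = np.linspace(0.0, 1.0, 1000)
dp = 2000.0 + 1500.0 * np.sin(2 * np.pi * 10 * t)            # Pa, 10 Hz pulsation
mdot = calibrate_and_convert(dp, mdot_mean_coriolis=0.05)    # kg/s from a Coriolis meter
print(mdot.mean())                                           # ≈ 0.05 by construction
```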
Abstract:
Transformer protection is one of the most challenging applications within the power system protective relay field. Transformers with a capacity rating exceeding 10 MVA are usually protected using differential current relays. Transformers are an aging and vulnerable bottleneck in the present power grid; therefore, quick fault detection and corresponding transformer de-energization are the key elements in minimizing transformer damage. Present differential current relays are based on digital signal processing (DSP). They combine DSP phasor estimation and protective-logic-based decision making. The limitations of existing DSP-based differential current relays must be identified to determine the best protection options for sensitive and quick fault detection. The development, implementation, and evaluation of a DSP differential current relay are detailed. The overall goal is to make fault detection faster without compromising secure and safe transformer operation. A detailed background on the DSP differential current relay is provided. Then different DSP phasor estimation filters are implemented and evaluated based on their ability to extract desired frequency components from the measured current signal quickly and accurately. The main focus of the phasor estimation evaluation is to identify the difference between non-recursive and recursive filtering methods. Then the protective logic of the DSP differential current relay is implemented and the required settings are made in accordance with the transformer application. Finally, the DSP differential current relay is evaluated using available transformer models within the ATP simulation environment. Recursive filtering methods were found to have a significant advantage over non-recursive filtering methods when evaluated individually and when applied in the DSP differential relay. Recursive filtering methods can be up to 50% faster than non-recursive methods, but can cause false trips due to overshoot if the only objective is speed. The relay sensitivity is, however, independent of the filtering method and depends on the settings of the relay's differential characteristics (pickup threshold and percent slope).
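The abstract does not name the specific phasor estimation filters used; as a neutral illustration of the non-recursive versus recursive distinction it discusses, here is a minimal full-cycle DFT phasor estimator in both forms (Python, with an assumed 16-samples-per-cycle synthetic current):

```python
import numpy as np

def dft_phasor_nonrecursive(x, N):
    """Full-cycle DFT phasor at each sample, recomputing the entire
    N-sample window every step (non-recursive form)."""
    X = np.full(len(x), np.nan, dtype=complex)
    for k in range(N - 1, len(x)):
        m = np.arange(k - N + 1, k + 1)
        X[k] = (2.0 / N) * np.sum(x[m] * np.exp(-2j * np.pi * m / N))
    return X

def dft_phasor_recursive(x, N):
    """Same estimate, updated recursively: each step only adds the newest
    sample and removes the oldest one; the first window seeds the recursion."""
    X = np.full(len(x), np.nan, dtype=complex)
    m0 = np.arange(N)
    X[N - 1] = (2.0 / N) * np.sum(x[:N] * np.exp(-2j * np.pi * m0 / N))
    for k in range(N, len(x)):
        X[k] = X[k - 1] + (2.0 / N) * (x[k] - x[k - N]) * np.exp(-2j * np.pi * k / N)
    return X

# Check on a steady current: amplitude 100 A, phase 0.3 rad, N = 16 samples/cycle
N = 16
k = np.arange(5 * N)
i = 100.0 * np.cos(2 * np.pi * k / N + 0.3)
Xr = dft_phasor_recursive(i, N)
print(abs(Xr[-1]), np.angle(Xr[-1]))   # ≈ 100.0 and ≈ 0.3
```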
Abstract:
BACKGROUND: Difference in pulse pressure (dPP) reliably predicts fluid responsiveness in patients. We have developed a respiratory variation (RV) monitoring device (RV monitor), which continuously records both airway pressure and arterial blood pressure (ABP). We compared the RV monitor measurements with manual dPP measurements. METHODS: ABP and airway pressure (PAW) from 24 patients were recorded. Data were fed to the RV monitor to calculate dPP and systolic pressure variation in two different ways: (a) considering both ABP and PAW (RV algorithm) and (b) considering ABP only (RV(slim) algorithm). Additionally, ABP and PAW were recorded intraoperatively at 10-min intervals for later calculation of dPP by manual assessment. Interobserver variability was determined. Manual dPP assessments were used for comparison with the automated measurements. To estimate the importance of the PAW signal, RV(slim) measurements were compared with RV measurements. RESULTS: For the 24 patients, 174 measurements (6-10 per patient) were recorded. Six observers assessed dPP manually in the first 8 patients (10-min intervals, 53 measurements); no interobserver variability occurred using a computer-assisted method. Bland-Altman analysis showed acceptable bias and limits of agreement for the two automated methods compared with the manual method (RV: -0.33% +/- 8.72% and RV(slim): -1.74% +/- 7.97%). The difference between RV and RV(slim) measurements is small (bias -1.05%, limits of agreement 5.67%). CONCLUSIONS: Measurements of the automated device are comparable with measurements obtained by human observers using a computer-assisted method. The importance of the PAW signal is questionable.
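The abstract does not restate how dPP is computed; using the common definition (100 × (PPmax − PPmin) divided by the mean of PPmax and PPmin over one respiratory cycle), a minimal sketch of the calculation might look like the following, with invented beat values:

```python
def pulse_pressure_variation(systolic, diastolic):
    """dPP over one respiratory cycle: 100 * (PPmax - PPmin) / mean(PPmax, PPmin),
    from per-beat systolic/diastolic arterial pressures (mmHg) within that cycle."""
    pp = [s - d for s, d in zip(systolic, diastolic)]
    pp_max, pp_min = max(pp), min(pp)
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Hypothetical beat series within one ventilator cycle
print(pulse_pressure_variation([120, 118, 112, 110], [80, 79, 78, 77]))  # ≈ 19.2 %
```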
Abstract:
PURPOSE: To assess family satisfaction in the ICU and to identify parameters for improvement. METHODS: Multicenter study in Swiss ICUs. Families were given a questionnaire covering overall satisfaction, satisfaction with care, and satisfaction with information/decision-making. Demographic, medical, and institutional data were gathered from patients, visitors, and ICUs. RESULTS: A total of 996 questionnaires from family members were analyzed. Individual questions were assessed, and summary measures (range 0-100) were calculated, with higher scores indicating greater satisfaction. The summary score was 78 +/- 14 (mean +/- SD) for overall satisfaction, 79 +/- 14 for care, and 77 +/- 15 for information/decision-making. In multivariable multilevel linear regression analyses, higher severity of illness was associated with higher satisfaction, while a higher patient:nurse ratio and written admission/discharge criteria were associated with lower overall satisfaction. Using performance-importance plots, items with high impact on overall satisfaction but low satisfaction were identified. They included emotional support; providing understandable, complete, and consistent information; and coordination of care. CONCLUSIONS: Overall, proxies were satisfied with care and with information/decision-making. Still, several factors, such as emotional support, coordination of care, and communication, are associated with poor satisfaction, suggesting the need for improvement. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s00134-009-1611-4) contains supplementary material, which is available to authorized users.