926 results for Event-based timing
Abstract:
Background: Hospital performance reports based on administrative data should distinguish differences in quality of care between hospitals from case mix related variation and random error effects. A study was undertaken to determine which of 12 diagnosis-outcome indicators measured across all hospitals in one state had significant risk adjusted systematic (or special cause) variation (SV) suggesting differences in quality of care. For those that did, we determined whether SV persists within hospital peer groups, whether indicator results correlate at the individual hospital level, and how many adverse outcomes would be avoided if all hospitals achieved indicator values equal to the best performing 20% of hospitals. Methods: All patients admitted during a 12 month period to 180 acute care hospitals in Queensland, Australia with heart failure (n = 5745), acute myocardial infarction (AMI) (n = 3427), or stroke (n = 2955) were entered into the study. Outcomes comprised in-hospital deaths, long hospital stays, and 30 day readmissions. Regression models produced standardised, risk adjusted diagnosis specific outcome event ratios for each hospital. Systematic and random variation in ratio distributions for each indicator were then apportioned using hierarchical statistical models. Results: Only five of 12 (42%) diagnosis-outcome indicators showed significant SV across all hospitals (long stays and same diagnosis readmissions for heart failure; in-hospital deaths and same diagnosis readmissions for AMI; and in-hospital deaths for stroke). Significant SV was only seen for two indicators within hospital peer groups (same diagnosis readmissions for heart failure in tertiary hospitals and in-hospital mortality for AMI in community hospitals). Only two pairs of indicators showed significant correlation. If all hospitals emulated the best performers, at least 20% of AMI and stroke deaths, heart failure long stays, and heart failure and AMI readmissions could be avoided. Conclusions: Diagnosis-outcome indicators based on administrative data require validation as markers of significant risk adjusted SV. Validated indicators allow quantification of realisable outcome benefits if all hospitals achieved best performer levels. The overall level of quality of care within single institutions cannot be inferred from the results of one or a few indicators.
Abstract:
The efficiency of inhibitory control processes has been proposed as a mechanism constraining working-memory capacity. In order to investigate genetic influences on processes that may reflect interference control, event-related potential (ERP) activity recorded at frontal sites, during distracting and nondistracting conditions of a working-memory task, was examined in a sample of 509 twin pairs. The ERP component of interest was the slow wave (SW). Considerable overlap in the source of genetic influence was found, with a common genetic factor accounting for 37-45% of SW variance irrespective of condition. However, 3-8% of SW variance in the distracting condition was influenced by an independent genetic source. These results suggest that neural responses to irrelevant and distracting information that may disrupt working-memory performance differ in a fundamental way from perceptual and memory-based processing in a working-memory task. Furthermore, the results are consistent with the view that cognition is a complex genetic trait influenced by numerous genes of small influence.
Abstract:
Firms have embraced electronic commerce as a means of doing business, either because they see it as a way to improve efficiency, grow market share, or expand into new markets, or because they view it as essential for survival. Recent research in the United States provides some evidence that the market does value investments in electronic commerce. Following research suggesting that, in certain circumstances, the market values noninnovative as well as innovative investments in new products, we partition electronic commerce investment project announcements into innovative and noninnovative categories to determine whether there are excess returns associated with these types of announcements. Our overall results are consistent with the United States findings that the market values investments in electronic commerce projects; in addition, we find that noninnovative investments are perceived as more valuable to the firm than innovative investments. On average, the market expects innovative investments to earn a return commensurate with their risk. We conclude that innovative electronic commerce projects are most likely seen by the capital market as easily replicable, and consequently have little, if any, competitive advantage period. On the other hand, we conclude from the noninnovative investment results that these types of investments are seen as being compatible with a firm's assets-in-place, in particular its information technology capabilities, a view consistent with the resource-based view of the firm.
Abstract:
Background: Few studies have examined the potential benefits of specialist nurse-led programs of care involving home and clinic-based follow-up to optimise the post-discharge management of chronic heart failure (CHF). Objective: To determine the effectiveness of a hybrid program of clinic plus home-based intervention (C+HBI) in reducing recurrent hospitalisation in CHF patients. Methods: CHF patients with evidence of left ventricular systolic dysfunction admitted to two hospitals in Northern England were assigned to a C+HBI lasting 6 months post-discharge (n=58) or to usual post-discharge care (UC: n=48) via a cluster randomization protocol. The co-primary endpoints were death or unplanned readmission (event-free survival) and the rate of recurrent, all-cause readmission within 6 months of hospital discharge. Results: During study follow-up, more UC patients had an unplanned readmission for any cause (44% vs. 22%; P=0.0191; OR 1.95, 95% CI 1.10-3.48) whilst 7 (15%) versus 5 (9%) UC and C+HBI patients, respectively, died (P=NS). Overall, 15 (26%) C+HBI versus 21 (44%) UC patients experienced a primary endpoint. C+HBI was associated with a non-significant 45% reduction in the risk of death or readmission when adjusting for potential confounders (RR 0.55, 95% CI 0.28-1.08; P=0.08). Overall, C+HBI patients accumulated significantly fewer unplanned readmissions (15 vs. 45: P
Abstract:
An appreciation of the physical mechanisms which cause observed seismicity complexity is fundamental to the understanding of the temporal behaviour of faults and single slip events. Numerical simulation of fault slip can provide insights into fault processes, and may lead towards an answer to these questions, by allowing exploration of the parameter spaces which influence the microscopic and macroscopic physics of the processes involved. Particle-based models such as the Lattice Solid Model have been used previously for the simulation of stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in the power of computers and the ability to use the power of parallel computer systems have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. A very simple friction law with no rate dependency and no spatial heterogeneity in the intrinsic coefficient of friction is used in the model. To simulate earthquake dynamics the model is sheared in a direction parallel to the fault plane with a constant velocity at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces on the fault. Slip events with a wide range of event sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between the events. In some of the larger events highly complex slip patterns are observed.
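The driving mechanism described here (constant-velocity loading punctuated by threshold-triggered slip) can be illustrated with a far simpler model. The sketch below is a one-dimensional spring-block (Burridge-Knopoff style) analogue, not the three-dimensional Lattice Solid Model of the paper; the stiffnesses, friction levels, and driving rate are illustrative assumptions chosen only to produce slip events of widely varying size.

```python
import numpy as np

def stick_slip_1d(n_blocks=64, k_spring=1.0, k_plate=1.0, v_drive=0.01,
                  mu_static=1.0, mu_dynamic=0.5, steps=20_000, dt=0.1, seed=1):
    """Quasi-static 1-D spring-block stick-slip sketch (illustrative parameters).

    Blocks are coupled to their neighbours by springs (k_spring) and dragged
    through leaf springs (k_plate) by a plate moving at constant velocity
    v_drive.  A block slips when the net force on it exceeds the static
    friction threshold; slips can cascade to neighbours, producing events
    with a broad range of sizes.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=0.01, size=n_blocks)   # block positions, small random start
    plate = 0.0
    event_sizes = []
    for _ in range(steps):
        plate += v_drive * dt
        slipped = 0
        while True:
            # Net force on each block: plate pull plus neighbour coupling
            # (x[i+1] + x[i-1] - 2 x[i], with free ends at the boundaries).
            coupling = np.diff(x, append=x[-1]) - np.diff(x, prepend=x[0])
            force = k_plate * (plate - x) + k_spring * coupling
            moving = np.abs(force) > mu_static
            if not moving.any():
                break
            # Move each over-threshold block so its local force relaxes to the
            # dynamic friction level (local-stiffness approximation).
            x[moving] += (force[moving] - np.sign(force[moving]) * mu_dynamic) / (
                k_plate + 2.0 * k_spring)
            slipped += int(moving.sum())
        if slipped:
            event_sizes.append(slipped)   # crude event size: block-slips this step
    return np.array(event_sizes)

if __name__ == "__main__":
    sizes = stick_slip_1d()
    print("events:", sizes.size, "largest:", sizes.max() if sizes.size else 0)
```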
Abstract:
The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, ..., Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, ..., Y_n). One estimator uses importance sampling (for Y_n only), and the other uses conditional Monte Carlo, conditioning upon Y_1, ..., Y_{n-1}. Properties of the relative error of the estimators are derived and a numerical study is given in terms of the M/G/1 queue, in which n is replaced by an independent geometric random variable N. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first heavy-tailed example of an estimator with bounded relative error. Further improvements are obtained in the random-N case by incorporating control variates and stratification techniques into the new estimation procedures.
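The conditional Monte Carlo estimator based on this identity is compact enough to sketch. Conditioning on Y_1, ..., Y_{n-1} gives the single-replication estimate Z = n * F_bar(max(M_{n-1}, u - S_{n-1})), where F_bar is the tail function of Y. The code below is a minimal sketch assuming i.i.d. Pareto summands with a known tail; the distribution, tail index, and replication count are illustrative choices, not those of the paper's M/G/1 study.

```python
import numpy as np

def pareto_tail(x, alpha=1.5):
    """Tail function F_bar(x) = P(Y > x) for a Pareto(alpha) variable on [1, inf)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 1.0, 1.0, x ** (-alpha))

def conditional_mc_estimate(u, n, alpha=1.5, reps=100_000, seed=0):
    """Conditional Monte Carlo estimate of P(S_n > u) for i.i.d. Pareto(alpha) summands.

    Uses P(S_n > u) = n * P(S_n > u, M_n = Y_n) and conditions on Y_1, ..., Y_{n-1}:
        Z = n * F_bar(max(M_{n-1}, u - S_{n-1})).
    """
    rng = np.random.default_rng(seed)
    # Draw the first n-1 summands for each replication (inverse-transform Pareto).
    y = (1.0 - rng.random((reps, n - 1))) ** (-1.0 / alpha)
    s = y.sum(axis=1)          # S_{n-1}
    m = y.max(axis=1)          # M_{n-1}
    z = n * pareto_tail(np.maximum(m, u - s), alpha)
    return z.mean(), z.std(ddof=1) / np.sqrt(reps)

if __name__ == "__main__":
    est, se = conditional_mc_estimate(u=50.0, n=10)
    print(f"P(S_10 > 50) ~ {est:.3e} +/- {se:.1e}")
```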
Abstract:
Background: Data on the long-term benefits of nonspecific disease management programs are limited. We performed a long-term follow-up of a previously published randomized trial. Methods: We compared all-cause mortality and recurrent hospitalization during a median follow-up of 7.5 years in a heterogeneous cohort of patients with chronic illness initially exposed to a multidisciplinary, home-based intervention (HBI) (n = 260) or to usual postdischarge care (n = 268). Results: During follow-up, HBI had no impact on all-cause mortality (relative risk, 1.04; 95% confidence interval, 0.80-1.35) or event-free survival from death or unplanned hospitalization (relative risk, 1.03; 95% confidence interval, 0.86-1.24). Initial analysis suggested that HBI had only a marginal impact in reducing unplanned hospitalization, with 677 readmissions vs 824 for the usual care group (mean +/- SD rate, 0.72 +/- 0.96 vs 0.84 +/- 1.20 readmissions/patient per year; P = .08). When accounting for increased hospital activity in HBI patients with chronic obstructive pulmonary disease during the first 2 years of follow-up, post hoc analyses showed that HBI reduced readmissions by 14% within 2 years in patients without this condition (mean +/- SD rate, 0.54 +/- 0.72 vs 0.63 +/- 0.88 readmissions/patient per year; P = .04) and by 21% in all surviving patients within 3 to 8 years (mean +/- SD rate, 0.64 +/- 1.26 vs 0.81 +/- 1.61 readmissions/patient per year; P = .03). Overall, recurrent hospital costs were significantly lower (14%) in the HBI group (mean +/- SD, $823 +/- $1642 vs $960 +/- $1376 per patient per year; P = .045). Conclusion: This unique study suggests that a nonspecific HBI provides long-term cost benefits in a range of chronic illnesses, except for chronic obstructive pulmonary disease.
Abstract:
Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete event and discrete time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into subcomponents. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
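To make the synthetic, object-oriented style concrete (and only the style; this is not the authors' model), the sketch below assembles a few hypothetical objects, a DrugParticle and a chain of SinusoidSegment objects, and runs a discrete single-pass "perfusion" that counts how much of each compound reaches the outflow. The compound names echo the abstract, but the uptake probabilities and segment count are invented for illustration.

```python
import random

class DrugParticle:
    """An object standing in for one dose element of a compound."""
    def __init__(self, name, uptake_prob):
        self.name = name
        self.uptake_prob = uptake_prob   # chance of being taken up per segment (assumed)

class SinusoidSegment:
    """A piece of the flow path; may remove particles to mimic hepatic uptake."""
    def __init__(self, rng):
        self.rng = rng
    def transit(self, particle):
        # Return True if the particle survives this segment (is not taken up).
        return self.rng.random() >= particle.uptake_prob

def perfusion_experiment(n_particles=10_000, n_segments=20, seed=42):
    """Discrete, single-pass 'perfusion': fraction of particles reaching the outflow."""
    rng = random.Random(seed)
    segments = [SinusoidSegment(rng) for _ in range(n_segments)]
    compounds = [DrugParticle("sucrose", uptake_prob=0.0),      # extracellular marker
                 DrugParticle("antipyrine", uptake_prob=0.02)]  # illustrative uptake
    recovered = {c.name: 0 for c in compounds}
    for compound in compounds:
        for _ in range(n_particles):
            if all(seg.transit(compound) for seg in segments):
                recovered[compound.name] += 1
    return {name: count / n_particles for name, count in recovered.items()}

if __name__ == "__main__":
    print(perfusion_experiment())   # fraction of each compound recovered at outflow
```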
Abstract:
Mudrocks and carbonates of the Isa superbasin in the Lawn Hill platform in northern Australia host major base metal sulfide mineralization, including the giant strata-bound Century Zn-Pb deposit. Mineral paragenesis, stable isotope, and K-Ar dating studies demonstrate that long-lived structures such as the Termite Range fault acted as hot fluid conduits several times during the Paleoproterozoic and Mesoproterozoic in response to major tectonic events. Illite and chlorite crystallinity studies suggest the southern part of the platform has experienced higher temperatures (up to 300 degrees C) than similar stratigraphic horizons in the north. The irregular downhole variation of illite crystallinity values provides further information on the thermal regime in the basin and shows that clay formation was controlled not only by temperature increase with depth but also by high water/rock ratios along relatively permeable zones. K-Ar dating of illite, in combination with other data, may indicate three major thermal events in the central and northern Lawn Hill platform at 1500, 1440 to 1400, and 1250 to 1150 Ma. This study did not detect the earlier Century base metal mineralizing event at 1575 Ma. The 1500 Ma ages are recorded only in the south and correspond to the age of the Late Isan orogeny and deposition of the Lower Roper superbasin. They may reflect exhumation of a provenance region. The 1440 to 1300 Ma ages are related to fault reactivation and a thermal pulse at ~1440 to 1400 Ma, possibly accompanied by fluid flow, with subsequent enhanced cooling possibly due to thermal relaxation or further crustal exhumation. The youngest thermal and/or fluid-flow event at 1250 to 1150 Ma is recorded mainly to the east of the Termite Range fault and may be related to the assembly of the Rodinian supercontinent. Fluids in equilibrium with illite that formed over a range of temperatures, at different times in different parts of the platform, have relatively uniform oxygen isotope compositions and more variable hydrogen isotope compositions (delta O-18 = 3.5-9.7 parts per thousand V-SMOW; delta D = -94 to -36 parts per thousand V-SMOW). The extent of the O-18 enrichment and the variably depleted hydrogen isotope compositions suggest the illite interacted with deep-basin hypersaline brines that were composed of evaporated seawater and/or highly evolved meteoric water. Siderite is the most abundant iron-rich gangue phase in the Century Zn-Pb deposit, which is surrounded by an extensive ferroan carbonate alteration halo. Modeling suggests that the ore siderite formed at temperatures of 120 degrees to 150 degrees C, whereas siderite and ankerite in the alteration halo formed at temperatures of 150 degrees to 180 degrees C. The calculated isotopic compositions of the fluids are consistent with O-18-rich basinal brines and mixed inorganic and organic carbon sources (delta O-18 = 3-10 parts per thousand V-SMOW, delta C-13 = -7 to -3 parts per thousand V-PDB). In the northeast Lawn Hill platform, carbonate-rich rocks preserve marine to early diagenetic carbon and oxygen isotope compositions, whereas ferroan carbonate cements in siltstones and shales in the Desert Creek borehole are O-18 and C-13 depleted relative to the sedimentary carbonates.
The good agreement between temperature estimates from illite crystallinity and organic reflectance (160 degrees-270 degrees C) and the inverse correlation with carbonate delta O-18 values indicate that organic maturation and carbonate precipitation in the northeast Lawn Hill platform resulted from interaction with the 1250 to 1150 Ma fluids. The calculated isotopic compositions of the fluid are consistent with an evolved basinal brine (delta O-18 = 5.1-9.4 parts per thousand V-SMOW; delta C-13 = -13.2 to -3.7 parts per thousand V-PDB) that contained a variable organic carbon component from the oxidation and/or hydrolysis of organic matter in the host sequence. The occurrence of extensive O-18- and C-13-depleted ankerite and siderite alteration in Desert Creek is related to the high temperature of the 1250 to 1150 Ma fluid-flow event in the northeast Lawn Hill platform, in contrast to the lower temperature fluids associated with the earlier Century Zn-Pb deposit in the central Lawn Hill platform.
Abstract:
The stratiform Century Zn-Pb deposit and the discordant Zn-Pb lode deposits of the Burketown mineral field, northern Australia, host ore and gangue minerals with primary fluid inclusions that have not been affected by the Isan orogeny, thus providing a unique opportunity to investigate the nature of the ore-forming brines. All of the deposits are hosted in shales and siltstones belonging to the Isa superbasin and comprise sphalerite, pyrite, carbonate, quartz, galena, minor chalcopyrite, and minor illite. According to Pb model ages, the main ore stage of mineralization at Century formed at 1575 Ma, some 20 m.y. after deposition of the host shale sequence. Microthermometry on undeformed, primary fluid inclusions hosted in porous sphalerite shows that the Zn at Century was transported to the deposit by a homogeneous, Ca2+- and Na+-bearing brine with a salinity of 21.6 wt percent NaCl equiv. delta D-fluid of the fluid inclusion water ranges from -89 to -83 per mil, consistent with a basinal brine that evolved from meteoric water. Fluid inclusion homogenization temperatures range between 74 degrees and 125 degrees C, which are lower than the 120 degrees to 160 degrees C range calculated from vitrinite reflectance and illite crystallinity data from the deposit. This discrepancy indicates that mineralization likely formed at 50 to 85 MPa, corresponding to a depth of 1,900 to 3,100 m. Transgressive galena-sphalerite veins that cut stratiform mineralization at Century and breccia-filled quartz-dolomite-sphalerite-galena veins in the discordant Zn-Pb lodes have Pb model ages between 1575 and 1485 Ma. Raman spectroscopy and microthermometry reveal that the primary fluid inclusions in these veins contain Ca2+ and Na+, but they have lower salinities, between 23 and 10 wt percent NaCl equiv, and higher delta D-fluid values, ranging from -89 to -61 per mil, than fluid inclusions in porous sphalerite from Century. Fluid inclusion water from sphalerite in one of the lode deposits has delta O-18(fluid) values of 1.6 and 2.4 per mil, indistinguishable from the delta O-18(fluid) values between -0.3 and +7.4 per mil calculated from the isotopic composition of coexisting quartz, dolomite, and illite. The trend toward lower salinities and higher delta D-fluid values relative to the earlier mineralizing fluids is attributed to mixing between the fluid that formed Century and a seawater-derived fluid from a different source. Based on seismic data from the Lawn Hill platform and paragenetic and geochemical results from the Leichhardt River fault trough to the south, diagenetic aquifers in the underlying Calvert superbasin appear to have been the most likely sources for the fluids that formed Century and the discordant lode deposits. Paragenetically late sphalerite and calcite cut sphalerite, quartz, and dolomite in the lode deposits and contain Na+-dominated fluid inclusions with much lower salinities than their older counterparts. The isotopic composition of calcite also indicates delta O-18(fluid) from 3.3 to 10.7 per mil, which is larger than the range obtained from synmineralization minerals, supporting the idea that a unique fluid source was involved. The absolute timing of this event is unclear, but a plethora of Pb model, K-Ar, and Ar-40/Ar-39 ages between 1440 and 1300 Ma indicate that a significant volume of fluid was mobilized at this time. The deposition of the Roper superbasin from ca.
1492 +/- 4 Ma suggests that these late veins formed from fluids that may have been derived from aquifers in overlying sediments of the Roper superbasin. Clear, buck, and drusy quartz in veins unrelated to any form of Pb-Zn mineralization record the last major fluid event in the Burketown mineral field and form distinct outcrops and ridges in the district. Fluid inclusions in these veins indicate formation from a low-salinity, 300 +/- 80 degrees C fluid. Temperatures approaching 300 degrees C recorded in organic matter adjacent to faults and at sequence boundaries correspond to K-Ar ages spanning 1300 to 1100 Ma, which coincides with regional hydrothermal activity in the northern Lawn Hill platform and the emplacement of the Lakeview Dolerite at the time of assembly of the Rodinia supercontinent.
Abstract:
Pattern discovery in a long temporal event sequence is of great importance in many application domains. Most of the previous work focuses on identifying positive associations among time-stamped event types. In this paper, we introduce the problem of defining and discovering negative associations that, like positive rules, may also serve as a source of knowledge discovery. In general, an event-oriented pattern is a pattern that is associated with a selected type of event, called a target event. As a counterpart to previous research, we identify patterns that have a negative relationship with the target events. A set of criteria is defined to evaluate the interestingness of patterns associated with such negative relationships. For counting the frequency of a pattern, we propose a new approach, called unique minimal occurrence, which guarantees that the Apriori property holds for all patterns in a long sequence. Based on the interestingness measures, algorithms are proposed to discover potentially interesting patterns for this negative rule problem. Finally, experiments are reported on a real application.
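As a rough illustration of the underlying idea of contrasting how often a pattern appears before target events with how often it appears overall (not the unique-minimal-occurrence counting or the interestingness criteria defined in the paper), the sketch below scores single event types by their representation in fixed-width windows preceding a target event. All function names, the window width, and the scoring ratio are hypothetical.

```python
from collections import Counter

def windows_before_targets(sequence, target, width):
    """Yield the set of event types seen in the window (t - width, t) before each
    target occurrence.  `sequence` is a list of (timestamp, event_type) pairs."""
    for t0, e0 in sequence:
        if e0 != target:
            continue
        yield {e for t, e in sequence if t0 - width < t < t0 and e != target}

def association_scores(sequence, target, width, candidates):
    """Crude contrast between a candidate's frequency in pre-target windows and
    its overall frequency; values well below 1 hint at a negative association."""
    pre_windows = list(windows_before_targets(sequence, target, width))
    n_windows = max(len(pre_windows), 1)
    pre_counts = Counter(e for w in pre_windows for e in w)
    overall = Counter(e for _, e in sequence if e != target)
    total = max(sum(overall.values()), 1)
    return {e: (pre_counts[e] / n_windows) / (overall[e] / total)
            for e in candidates if overall[e] > 0}

if __name__ == "__main__":
    seq = [(1, "a"), (2, "b"), (4, "X"), (6, "a"), (7, "c"), (9, "X"), (11, "b")]
    # "a" never appears just before target "X" here, so it scores 0 (negative hint).
    print(association_scores(seq, target="X", width=3, candidates=["a", "b", "c"]))
```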
Abstract:
Studies have shown that an increase in arterial stiffening can indicate the presence of cardiovascular diseases like hypertension. The current gold standard in clinical practice is to measure the blood pressure of patients using a mercury sphygmomanometer. However, the nature of this technique is not suitable for prolonged monitoring. It has been established that pulse wave velocity is a direct measure of arterial stiffening. However, its usefulness is hampered by the absence of techniques to estimate it non-invasively. Pulse transit time (PTT) is a simple and non-intrusive method derived from pulse wave velocity. It has shown its capability in childhood respiratory sleep studies. Recently, regression equations that can predict PTT values for healthy Caucasian children were formulated. However, their usefulness in identifying hypertensive children based on mean PTT values has not been investigated. This was a continuation study in which 3 more Caucasian male children with known clinical hypertension were recruited. Results indicated that the PTT predictive equations are able to distinguish hypertensive children from their normal counterparts significantly (p < 0.05). Hence, PTT can be a useful diagnostic tool for identifying hypertension in children and shows potential as a non-invasive continual monitor of arterial stiffening.
Abstract:
Pattern discovery in temporal event sequences is of great importance in many application domains, such as telecommunication network fault analysis. In reality, not every type of event has an accurate timestamp. Some of them, defined as inaccurate events, may only have an interval as their possible time of occurrence. The existence of inaccurate events may cause uncertainty in event ordering. The traditional support model cannot deal with this uncertainty, which would cause some interesting patterns to be missed. A new concept, precise support, is introduced to evaluate the probability of a pattern being contained in a sequence. Based on this new metric, we define the uncertainty model and present an algorithm to discover interesting patterns in a sequence database that has one type of inaccurate event. In our model, the number of types of inaccurate events can readily be extended to k, although at the cost of increased computational complexity.
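A minimal sketch of the precise-support idea follows, under two simplifying assumptions that are mine rather than the paper's: the inaccurate event is uniformly distributed over its interval, and the pattern is a single "A before B" pair in which B has an exact timestamp. The support is then the average, over sequences, of the probability that each sequence contains the pattern.

```python
def prob_before(interval, t_exact):
    """Probability that an inaccurately timed event, assumed uniform on
    interval = (lo, hi), occurs before an accurately timed event at t_exact."""
    lo, hi = interval
    if hi <= lo:                       # degenerate interval: effectively exact
        return 1.0 if lo < t_exact else 0.0
    return min(max((t_exact - lo) / (hi - lo), 0.0), 1.0)

def precise_support(sequences, pattern_prob):
    """Average probability, over a database of sequences, that each sequence
    contains the pattern (pattern_prob maps a sequence to that probability)."""
    return sum(pattern_prob(s) for s in sequences) / len(sequences)

if __name__ == "__main__":
    # Pattern "A then B": A is inaccurate (interval), B has an exact timestamp.
    sequences = [
        {"A": (0.0, 10.0), "B": 4.0},   # P(A before B) = 0.4
        {"A": (2.0, 3.0),  "B": 5.0},   # P(A before B) = 1.0
    ]
    support = precise_support(sequences, lambda s: prob_before(s["A"], s["B"]))
    print(f"precise support of A -> B: {support:.2f}")   # 0.70
```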
Abstract:
This paper derives the performance union bound of space-time trellis codes in an orthogonal frequency division multiplexing system (STTC-OFDM) over quasi-static frequency-selective fading channels based on the distance spectrum technique. The distance spectrum is the enumeration of the codeword difference measures and their multiplicities obtained by exhaustively searching through all possible error event paths. The exhaustive search approach can be used for low memory order STTCs with a small frame size. However, with moderate memory order STTCs and a moderate frame size, the computational cost of exhaustive search increases exponentially, and it may become impractical for high memory order STTCs. This calls for advanced computational techniques such as genetic algorithms (GAs). In this paper, a GA with a sharing function method is used to locate the multiple solutions of the distance spectrum for high memory order STTCs. Simulations evaluate the performance union bound and compare the complexity of the non-GA-aided and GA-aided distance spectrum techniques. They show that the union bound gives a close performance measure at high signal-to-noise ratio (SNR). They also show that the GA sharing function based distance spectrum technique requires much less computational time than the exhaustive search approach while retaining satisfactory accuracy.
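The sharing-function device can be sketched with the standard fitness-sharing (niching) formulation, in which an individual's raw fitness is divided by a niche count built from pairwise distances so that several distinct solutions (here, several error-event paths contributing to the distance spectrum) survive in one GA run. The code below is a generic illustration, not the paper's GA; the Euclidean genotype distance and the parameter values are assumptions.

```python
import numpy as np

def sharing_function(d, sigma_share, alpha=1.0):
    """Standard niching kernel: sh(d) = 1 - (d / sigma)^alpha for d < sigma, else 0."""
    d = np.asarray(d, dtype=float)
    return np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)

def shared_fitness(population, raw_fitness, sigma_share, alpha=1.0):
    """Degrade the raw fitness of individuals crowding the same niche, so the GA
    keeps multiple distinct high-fitness solutions alive simultaneously."""
    pop = np.asarray(population, dtype=float)
    # Pairwise distances between individuals (Euclidean in genotype space).
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    niche_count = sharing_function(d, sigma_share, alpha).sum(axis=1)
    return np.asarray(raw_fitness, dtype=float) / niche_count

if __name__ == "__main__":
    pop = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]      # two crowded individuals, one loner
    fit = [1.0, 1.0, 1.0]
    print(shared_fitness(pop, fit, sigma_share=1.0))  # the crowded pair is penalised
```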
Abstract:
In developing neural network techniques for real-world applications it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian process, and predictive error bars evaluated on real data. The problem domain is concerned with the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and, because of its safety-critical nature, is a good candidate for exploring the use of confidence predictions.
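Of the three approaches compared, the Gaussian process route is the easiest to sketch, since its predictive variance yields point-wise error bars in closed form. The example below is generic exact GP regression on synthetic data, not the authors' engine-calibration models; the kernel, hyperparameters, and data are illustrative assumptions. Note how the error bars widen where the test inputs fall outside the training range.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sq = (a[:, None] - b[None, :]) ** 2
    return signal_var * np.exp(-0.5 * sq / length_scale ** 2)

def gp_predict(x_train, y_train, x_test, noise_var=0.01, **kern):
    """Exact GP regression: returns the predictive mean and one-sigma error bars."""
    K = rbf_kernel(x_train, x_train, **kern) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, **kern)
    K_ss = rbf_kernel(x_test, x_test, **kern)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0) + noise_var
    return mean, np.sqrt(var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 25)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
    x_new = np.linspace(-1.0, 6.0, 8)               # extrapolation widens the bars
    mu, sigma = gp_predict(x, y, x_new, length_scale=0.8)
    for xi, m, s in zip(x_new, mu, sigma):
        print(f"x={xi:5.2f}  prediction={m:6.3f} +/- {1.96 * s:5.3f}")
```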