925 results for Timing errors


Relevance: 20.00%

Abstract:

Background Pelvic inflammatory disease (PID) results from the ascending spread of microorganisms from the vagina and endocervix to the upper genital tract. PID can lead to infertility, ectopic pregnancy and chronic pelvic pain. The timing of development of PID after the sexually transmitted bacterial infection Chlamydia trachomatis (chlamydia) might affect the impact of screening interventions, but is currently unknown. This study investigates three hypothetical processes for the timing of progression: at the start, at the end, or throughout the duration of chlamydia infection. Methods We develop a compartmental model that describes the trial structure of a published randomised controlled trial (RCT) and allows each of the three processes to be examined using the same model structure. The RCT estimated the effect of a single chlamydia screening test on the cumulative incidence of PID up to one year later. The fraction of chlamydia-infected women who progress to PID is obtained for each hypothetical process by maximum likelihood, using the results of the RCT. Results The predicted cumulative incidence of PID cases from all causes after one year depends on the fraction of chlamydia-infected women who progress to PID and on the type of progression. Progression at a constant rate throughout a chlamydia infection, or at its end, was compatible with the findings of the RCT. The corresponding estimated fraction of chlamydia-infected women who develop PID is 10% (95% confidence interval 7-13%) in both processes. Conclusions The findings of this study suggest that clinical PID can occur throughout the course of a chlamydia infection, leaving a window of opportunity for screening to prevent PID.
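The three hypothetical progression processes can be illustrated with a toy cohort calculation. This is a minimal sketch, not the paper's compartmental model: the clearance rate `gamma` (one clearance per year on average), the time step, and the hazard parameterisation are all assumptions made here for illustration.

```python
def cumulative_pid(process, f=0.10, gamma=1.0, t_end=1.0, dt=0.001):
    """Toy cohort model: fraction of women (all infected at t=0) who
    develop PID by t_end under three hypothetical progression processes.
    f      : overall fraction of infected women who ever progress to PID
    gamma  : chlamydia clearance rate (1 / mean infection duration, per year)
    process: 'start', 'constant' (throughout infection), or 'end'
    """
    if process == "start":              # progression at the start of infection
        return f
    # constant hazard r chosen so that r / (r + gamma) = f overall
    r = f * gamma / (1.0 - f) if process == "constant" else 0.0
    infected, pid = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        if process == "constant":       # progression throughout infection
            prog = r * infected * dt
            infected -= (gamma + r) * infected * dt
        else:                           # 'end': a fraction f progress at clearance
            prog = f * gamma * infected * dt
            infected -= gamma * infected * dt
        pid += prog
    return pid
```

For the same overall fraction f, the three processes yield different one-year cumulative incidences, which is what lets the RCT data discriminate between them.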

Relevance: 20.00%

Abstract:

The objective of this study was to determine the optimal time interval for a repeated Chlamydia trachomatis (chlamydia) test.

Relevance: 20.00%

Abstract:

Decompressive craniectomy (DC) due to intractably elevated intracranial pressure mandates later cranioplasty (CP). However, the optimal timing of CP remains controversial. We therefore analyzed our prospectively conducted database concerning the timing of CP and associated post-operative complications. From October 1999 to August 2011, 280 cranioplasty procedures were performed at the authors' institution. Patients were stratified into two groups according to the time from DC to cranioplasty (early, ≤2 months, and late, >2 months). Patient characteristics, timing of CP, and CP-related complications were analyzed. Overall CP was performed early in 19% and late in 81%. The overall complication rate was 16.4%. Complications after CP included epidural or subdural hematoma (6%), wound healing disturbance (5.7%), abscess (1.4%), hygroma (1.1%), cerebrospinal fluid fistula (1.1%), and other (1.1%). Patients who underwent early CP suffered significantly more often from complications compared to patients who underwent late CP (25.9% versus 14.2%; p=0.04). Patients with ventriculoperitoneal (VP) shunt had a significantly higher rate of complications after CP compared to patients without VP shunt (p=0.007). On multivariate analysis, early CP, the presence of a VP shunt, and intracerebral hemorrhage as underlying pathology for DC, were significant predictors of post-operative complications after CP. We provide detailed data on surgical timing and complications for cranioplasty after DC. The present data suggest that patients who undergo late CP might benefit from a lower complication rate. This might influence future surgical decision making regarding optimal timing of cranioplasty.
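The reported early-versus-late complication comparison can be reproduced approximately with a two-proportion z-test. The group sizes below are reconstructed from the reported percentages (19% of 280 ≈ 53 early, 227 late; complication counts rounded from 25.9% and 14.2%) and are therefore assumptions, not figures from the paper.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled variance and no
    continuity correction; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))               # = 2 * (1 - Phi(|z|))
    return z, p

# ~25.9% of 53 early-CP and ~14.2% of 227 late-CP patients had complications
z, p = two_proportion_z(14, 53, 32, 227)
```

The resulting p-value lands in the same range as the paper's reported p = 0.04; the exact figure depends on which test the authors used.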

Relevance: 20.00%

Abstract:

The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses (n = 119) involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift, even if no ME had occurred. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.
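The reported percentages use different denominators (returned MESRTs for the 29% and 5% figures, MEs for the 2.8% figure); the quick arithmetic check below makes the inferred denominators explicit. The denominators are inferred from the percentages, not stated calculations from the paper.

```python
returned = 987    # completed MESRTs over the one-month period
with_me = 288     # MESRTs indicating that an ME had occurred
prevented = 49    # MEs reported as prevented by registered nurses
harmed = 8        # MEs with patient consequences

me_rate = with_me / returned            # fraction of shifts with an ME (~29%)
prevented_rate = prevented / returned   # inferred denominator: returned MESRTs (~5%)
harm_rate = harmed / with_me            # inferred denominator: MEs (~2.8%)
```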

Relevance: 20.00%

Abstract:

Neurons generate spikes reliably with millisecond precision if driven by a fluctuating current--is it then possible to predict the spike timing knowing the input? We determined parameters of an adapting threshold model using data recorded in vitro from 24 layer 5 pyramidal neurons from rat somatosensory cortex, stimulated intracellularly by a fluctuating current simulating synaptic bombardment in vivo. The model generates output spikes whenever the membrane voltage (a filtered version of the input current) reaches a dynamic threshold. We find that for input currents with large fluctuation amplitude, up to 75% of the spike times can be predicted with a precision of +/-2 ms. Some of the intrinsic neuronal unreliability can be accounted for by a noisy threshold mechanism. Our results suggest that, under random current injection into the soma, (i) neuronal behavior in the subthreshold regime can be well approximated by a simple linear filter; and (ii) most of the nonlinearities are captured by a simple threshold process.
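The adapting-threshold idea can be sketched as follows. This is a minimal illustrative model, not the fitted model from the study: the parameter values, the voltage reset after a spike, and the deterministic (noise-free) threshold are assumptions made here.

```python
def adaptive_threshold_neuron(I, dt=0.1, tau_m=10.0, r_m=1.0,
                              theta0=1.0, d_theta=0.5, tau_theta=50.0):
    """Minimal adapting-threshold model: the membrane voltage v is a
    leaky (low-pass filtered) version of the input current; a spike is
    emitted whenever v reaches a dynamic threshold that jumps by d_theta
    after each spike and relaxes back to theta0 with time constant
    tau_theta.  I is a list of input-current samples; dt is in ms."""
    v, theta, spikes = 0.0, theta0, []
    for k, i_k in enumerate(I):
        v += dt / tau_m * (-v + r_m * i_k)          # leaky integration
        theta += dt / tau_theta * (theta0 - theta)  # threshold relaxation
        if v >= theta:
            spikes.append(k * dt)                   # spike time in ms
            theta += d_theta                        # spike-triggered adaptation
            v = 0.0                                 # reset (a simplification)
    return spikes
```

Driving the model with a constant suprathreshold current shows the characteristic adaptation: successive inter-spike intervals lengthen as the threshold builds up.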

Relevance: 20.00%

Abstract:

Calcium is a second messenger, which can trigger the modification of synaptic efficacy. We investigated the question of whether a differential rise in postsynaptic Ca2+ ([Ca2+]i) alone is sufficient to account for the induction of long-term potentiation (LTP) and long-term depression (LTD) of EPSPs in the basal dendrites of layer 2/3 pyramidal neurons of the somatosensory cortex. Volume-averaged [Ca2+]i transients were measured in spines of the basal dendritic arbor for spike-timing-dependent plasticity induction protocols. The rise in [Ca2+]i was uncorrelated to the direction of the change in synaptic efficacy, because several pairing protocols evoked similar spine [Ca2+]i transients but resulted in either LTP or LTD. The sequence dependence of near-coincident presynaptic and postsynaptic activity on the direction of changes in synaptic strength suggested that LTP and LTD were induced by two processes, which were controlled separately by postsynaptic [Ca2+]i levels. Activation of voltage-dependent Ca2+ channels before metabotropic glutamate receptors (mGluRs) resulted in the phospholipase C-dependent (PLC-dependent) synthesis of endocannabinoids, which acted as a retrograde messenger to induce LTD. LTP required a large [Ca2+]i transient evoked by NMDA receptor activation. Blocking mGluRs abolished the induction of LTD and uncovered the Ca2+-dependent induction of LTP. We conclude that the volume-averaged peak elevation of [Ca2+]i in spines of layer 2/3 pyramids determines the magnitude of long-term changes in synaptic efficacy. The direction of the change is controlled, however, via a mGluR-coupled signaling cascade. mGluRs act in conjunction with PLC as sequence-sensitive coincidence detectors, inducing LTD when postsynaptic action potentials precede presynaptic ones. Thus, presumably, two different Ca2+ sensors in spines control the induction of spike-timing-dependent synaptic plasticity.
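The sequence dependence described above is the hallmark of spike-timing-dependent plasticity. The standard exponential STDP window below is only an illustrative phenomenological rule, not the paper's biophysical mechanism; the amplitudes and time constants are conventional textbook values, assumed here.

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP window.  dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) -> potentiation (in the abstract: the
    NMDA-receptor pathway); post-before-pre (dt < 0) -> depression (the
    mGluR/PLC/endocannabinoid pathway)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

The sign of the weight change flips with the pre/post order, and its magnitude decays as the spikes move apart in time.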

Relevance: 20.00%

Abstract:

BACKGROUND: Following vitrectomy for PVR-associated retinal detachment, placement of an encircling band, filling with silicone oil (SO) and successful retinal reattachment, a recurrence of PVR can develop. Retinal redetachment after SO removal is usually due to secondary or residual PVR. We wanted to ascertain whether the anatomical and functional outcomes of surgery in patients with a reattached retina and recurrent PVR can be improved by delaying the removal of SO. PATIENTS AND METHODS: 112 consecutive patients with PVR-associated retinal detachment who had undergone vitrectomy with SO filling were monitored for at least 6 months after SO removal. Prior to SO removal, the retina posterior to the encircling band had to be completely reattached. Patients who developed PVR after SO filling were divided into two groups according to the duration of SO retention: 12-18 months (group 2: n = 48) and >18 months (group 3: n = 21). Individuals without PVR recurrence after SO filling, in whom the SO was consequently removed within 4-12 months, served as controls (group 1: n = 43). Anatomical success, intraocular pressure (IOP) and best-corrected visual acuity (BCVA) served as the primary clinical outcome parameters. RESULTS: Six months after SO removal, the anatomical success rates (86.3%, 88.8% and 84.6% in groups 1, 2 and 3, respectively; log-rank p = 0.794) and the BCVAs (p = 0.861) were comparable in the three groups. Mean IOP (p = 0.766) and the frequencies of complications such as PVR recurrence (p = 0.936), bullous keratopathy (p = 0.981) and macular pucker (p = 0.943) were likewise similar. Patients in whom SO was retained for more than 18 months had the highest IOPs and required the heaviest dosing with anti-glaucoma drugs. CONCLUSIONS: In patients who develop a recurrence of PVR after vitrectomy and SO filling, the surgeon can observe and treat retinal changes for up to 18 months without impairing the anatomical and functional outcomes. The retention of SO for more than 18 months does not improve the anatomical outcome. However, it can impair the functional outcome by precipitating the development of persistent secondary glaucoma.

Relevance: 20.00%

Abstract:

High-throughput SNP arrays provide estimates of genotypes for up to one million loci and are often used in genome-wide association studies. While these estimates are typically very accurate, genotyping errors do occur, and they can influence in particular the most extreme test statistics and p-values. Estimates of the genotype uncertainties are also available, although typically ignored. In this manuscript, we develop a framework to incorporate these genotype uncertainties in case-control studies for any genetic model. We verify that using the assumption of a "local alternative" in the score test is very reasonable for effect sizes typically seen in SNP association studies, and show that the power of the score test is simply a function of the correlation of the genotype probabilities with the true genotypes. We demonstrate that the power to detect a true association can be substantially increased for difficult-to-call genotypes, resulting in improved inference in association studies.
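One simple way to incorporate genotype uncertainty is to replace hard genotype calls with expected dosages in a 1-df score (trend) test. The sketch below is a minimal version of that idea under a logistic model, not the authors' full framework; the simulation parameters in the usage example are arbitrary.

```python
import numpy as np

def score_test_dosage(probs, y):
    """1-df score (Armitage-type trend) statistic for case-control data,
    using expected genotype dosages E[G] = P(G=1) + 2*P(G=2) in place of
    hard calls.  probs: (n, 3) genotype probabilities for 0/1/2 copies of
    the risk allele; y: 0/1 phenotype.  Under the null the statistic is
    approximately chi-square with 1 degree of freedom."""
    d = np.asarray(probs, float) @ np.array([0.0, 1.0, 2.0])  # dosages
    y = np.asarray(y, float)
    ybar = y.mean()
    u = np.sum((y - ybar) * d)                            # score
    v = ybar * (1 - ybar) * np.sum((d - d.mean()) ** 2)   # null variance
    return u * u / v
```

With perfectly called genotypes the dosage equals the hard call, so the statistic reduces to the usual trend test; noisy probabilities shrink the dosage-genotype correlation and, with it, the power.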

Relevance: 20.00%

Abstract:

Early prenatal diagnosis and in utero therapy of certain fetal diseases have the potential to reduce fetal morbidity and mortality. The intrauterine transplantation of stem cells provides, in some instances, a therapeutic option before definitive organ failure occurs. Clinical experience shows that certain diseases, such as immune deficiencies or inborn errors of metabolism, can be successfully treated using stem cells derived from bone marrow. However, a remaining problem is the low level of engraftment that can be achieved. Efforts are being made in animal models to optimise the graft and to study the recipient's microenvironment in order to increase long-term engraftment levels. Our experiments in mice show similar early homing of allogeneic and xenogeneic stem cells and reasonable early engraftment of allogeneic murine fetal liver cells (17.1% donor cells in peripheral blood 4 weeks after transplantation), whereas xenogeneic HSCs rapidly diminish owing to a lack of self-renewal and low differentiation capacity in the host's microenvironment. Allogeneic murine fetal liver cells show very good long-term engraftment (49.9% donor cells in peripheral blood 16 weeks after transplantation). Compared with rodents, the sheep model has the advantage of a body size and gestation comparable to those of the human fetus. Here, ultrasound-guided injection techniques significantly decreased fetal loss rates. In contrast to the murine in utero model, the repopulation capacity of allogeneic ovine fetal liver cells is lower (0.112% donor cells in peripheral blood 3 weeks after transplantation). The effect of MHC on engraftment levels seems to be marginal, since no differences could be observed between autologous and allogeneic transplantation (0.117% vs 0.112% donor cells in peripheral blood 1 to 2 weeks after transplantation). Further research is needed to study optimal timing and graft composition as well as immunological aspects of in utero transplantation.

Relevance: 20.00%

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
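The rotation step can be sketched in a few lines. This is a minimal illustration of the decorrelation idea, assuming the mean and covariance estimates are given; it omits the empirical CDF and pointwise standard errors the paper builds on top.

```python
import numpy as np

def rotated_residuals(y, mu_hat, v_hat):
    """Multiply the marginal residual vector by the Cholesky factor of
    the inverse estimated marginal covariance: if V = L L', the rotated
    residuals L^{-1}(y - mu) are approximately iid N(0, 1) when the
    model is correct, so their empirical CDF can be compared against the
    standard normal CDF."""
    L = np.linalg.cholesky(v_hat)
    # solve L r = (y - mu) rather than forming the inverse explicitly
    return np.linalg.solve(L, y - mu_hat)
```

As a sanity check: if y is generated as mu + L z with z iid standard normal, rotating with the true V recovers z exactly.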

Relevance: 20.00%

Abstract:

Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data of air pollution and health (small pollution effects, missing data, highly correlated predictors, non-linear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. By mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining information from site-specific results, and they can be easily extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity, Mortality, and Air Pollution Study, and we found that variance underestimation of as much as 40% has little effect on the national average.
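The pooling step, and the claimed insensitivity to first-stage variance underestimation, can be illustrated with toy numbers. The estimates, variances, and heterogeneity value below are invented for illustration, not taken from the study.

```python
import numpy as np

def pooled_effect(beta, v, tau2):
    """Second stage of a two-stage normal-normal hierarchical model:
    inverse-variance-weighted pooling of site-specific estimates beta,
    with within-site variances v and across-site heterogeneity tau2."""
    w = 1.0 / (np.asarray(v, float) + tau2)
    return float(np.sum(w * np.asarray(beta, float)) / np.sum(w))

# hypothetical site-specific relative-rate estimates and their variances
beta = np.array([1.0, 2.0, 3.0])
v = np.array([0.10, 0.20, 0.30])
tau2 = 0.5  # across-site heterogeneity

full = pooled_effect(beta, v, tau2)
deflated = pooled_effect(beta, 0.6 * v, tau2)  # 40% variance underestimation
```

Because the heterogeneity term tau2 dominates the weights, deflating every first-stage variance by 40% shifts the pooled estimate only slightly, mirroring the paper's finding.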

Relevance: 20.00%

Abstract:

Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors, regardless of the resulting harm, may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient; only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given that harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
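The paper's correlation measure can be mimicked with a simple empirical (non-Bayesian) version on made-up counts; a correlation near 1 says the cause profiles of harmful errors and near misses are similar. The counts below are hypothetical, and the 0.5 smoothing constant is an assumption, not the paper's hierarchical prior.

```python
import numpy as np

def cause_log_odds(counts):
    """Smoothed empirical log odds that each cause is cited within one
    harm stratum (0.5 is added to each count to avoid log(0))."""
    counts = np.asarray(counts, float)
    p = (counts + 0.5) / (counts.sum() + 0.5 * len(counts))
    return np.log(p / (1.0 - p))

# hypothetical cause counts among harm vs. no-harm (near-miss) reports
harm = [120, 60, 30, 15]
near_miss = [1100, 640, 270, 160]

corr = np.corrcoef(cause_log_odds(harm), cause_log_odds(near_miss))[0, 1]
```

In the Bayesian version the correlation is computed on posterior draws of the two log-odds vectors, giving a full posterior distribution for the similarity measure rather than a single number.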