912 results for ERROR THRESHOLD
Abstract:
Statement of the problem and public health significance. Hospitals were designed to be a safe haven and respite from disease and illness. However, a large body of evidence points to preventable errors in hospitals as the eighth leading cause of death among Americans. Twelve percent of Americans, or over 33.8 million people, are hospitalized each year. This population represents a significant portion of at-risk citizens exposed to hospital medical errors. Since the number of annual deaths due to hospital medical errors is estimated to exceed 44,000, the magnitude of this tragedy makes it a significant public health problem.

Specific aims. The specific aims of this study were threefold. First, to analyze the state of the states' mandatory hospital medical error reporting six years after the release of the influential IOM report, "To Err is Human." Second, to identify barriers to the reporting of medical errors by hospital personnel. Third, to identify hospital safety measures implemented to reduce medical errors and enhance patient safety.

Methods. A descriptive, longitudinal, retrospective design was used to address the first objective. The study data came from the twenty-one states with mandatory hospital reporting programs that report aggregate hospital error data accessible to the public through state websites. The data analysis included calculating the expected number of medical errors for each state according to IOM rates. Where possible, state-reported data were compared with the calculated IOM expected number of errors. A literature review was performed to achieve the second aim, identifying barriers to reporting medical errors. The final aim was accomplished through telephone interviews with principal patient safety/quality officers from five Texas hospitals with more than 700 beds.

Results. The state medical error data suggest vast underreporting of hospital medical errors to the states. The telephone interviews suggest that hospitals are working to reduce medical errors and create safer environments for patients. The literature review suggests that underreporting of medical errors at the state level stems from underreporting at the delivery level.
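The IOM-based expected-error calculation described in the Methods can be sketched as follows. This is an illustrative reconstruction, not the study's actual computation: the rate is derived from the national figures quoted above, and the state hospitalization count is hypothetical.

```python
# IOM-derived rate: ~44,000 annual deaths among ~33.8 million hospitalizations
IOM_DEATH_RATE = 44_000 / 33_800_000  # ≈ 0.0013 deaths per admission

def expected_errors(hospitalizations, rate=IOM_DEATH_RATE):
    """Expected annual fatal medical errors for a state, given its annual
    hospitalizations and a per-admission death rate."""
    return hospitalizations * rate

# Hypothetical state with 1.2 million annual hospitalizations:
estimate = expected_errors(1_200_000)  # ≈ 1,562 expected deaths
```

A state reporting far fewer events than such an estimate is one signal of the underreporting described in the Results.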
Abstract:
Medication errors, one of the most frequent types of medical errors, are a common cause of patient harm in hospital systems today. Nurses at the bedside are positioned to encounter many of these errors, since they are present at the start of the process (ordering/prescribing) and at its end (administration). One recommendation of the IOM (Institute of Medicine) report "To Err is Human" was for organizations to identify and learn from medical errors through event-reporting systems. While many organizations have reporting systems in place, research studies report significant underreporting by nurses. A systematic review of the literature was performed to identify factors contributing to the reporting and non-reporting of medication errors by nurses at the bedside.

Articles included in the review were primary or secondary studies, dated January 1, 2000 – July 2009, related to nursing medication error reporting. All 634 articles were reviewed using an algorithm developed to standardize the review process and filter out those that did not meet the study criteria. In addition, 142 article bibliographies were reviewed to find studies missed in the original literature search. After reviewing the 634 articles and the additional 108 articles discovered in the bibliography review, 41 articles met the study criteria and were included in the systematic review results.

Fear of punitive reactions to medication errors was a frequent barrier to error reporting: nurses fear reactions from their leadership, peers, patients and their families, nursing boards, and the media. Anonymous reporting systems and departments/organizations with a strong safety culture helped to encourage the reporting of medication errors by nursing staff.

Many of the studies included in this review do not yield generalizable results: the majority took place in single institutions/organizations with limited sample sizes. Stronger studies with larger sample sizes, using validated data-collection methods, are needed to establish stronger correlations between safety culture and nurse error reporting.
Abstract:
Background. Over 39.9% of the adult population aged forty or older in the United States has refractive error, yet little is known about the etiology of this condition, its associated risk factors, and the mechanisms involved, owing to the paucity of data on how refractive error changes in the adult population over time.

Aim. To evaluate risk factors for refractive error change over a 5-year period among persons aged 43 or older, by testing the hypothesis that age, gender, systemic disease, nuclear sclerosis, and baseline refractive error are all significantly associated with refractive error changes in patients at a Dallas, Texas private optometric office.

Methods. A retrospective chart review of subjective refraction, eye health, and self-reported health history was performed on patients at a private optometric office who were 43 or older in 2000 and who had eye examinations in both 2000 and 2005. Aphakic and pseudophakic eyes were excluded, as were eyes with best-corrected Snellen visual acuity of 20/40 or worse. After exclusions, refraction was obtained on 114 right eyes and 114 left eyes. Spherical equivalent (sphere + ½ cylinder) was used as the measure of refractive error.

Results. Similar changes in refractive error were observed in the two eyes. The 5-year change in spherical power was in a hyperopic direction for younger age groups and in a myopic direction for older subjects (P < 0.0001). The gender-adjusted mean change in refractive error in right eyes of persons aged 43 to 54, 55 to 64, 65 to 74, and 75 or older at baseline was +0.43 D, +0.46 D, -0.09 D, and -0.23 D, respectively. Refractive change was strongly related to baseline nuclear cataract severity; grades 4 to 5 were associated with a myopic shift (-0.38 D, P < 0.0001). The mean age-adjusted change in refraction was +0.27 D for hyperopic eyes, +0.56 D for emmetropic eyes, and +0.26 D for myopic eyes.

Conclusions. This report documents refractive error changes in an older population and confirms reported trends of a hyperopic shift before age 65 and a myopic shift thereafter associated with the development of nuclear cataract.
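The spherical-equivalent convention used as the outcome measure above can be sketched as follows; the example refraction values are illustrative, not taken from the study's charts.

```python
def spherical_equivalent(sphere, cylinder):
    """Spherical equivalent in diopters: sphere plus half the cylinder."""
    return sphere + cylinder / 2.0

# Example: a refraction of -1.00 DS / -0.50 DC recorded at two visits
se_2000 = spherical_equivalent(-1.00, -0.50)   # -1.25 D
se_2005 = spherical_equivalent(-0.75, -0.50)   # -1.00 D
change = se_2005 - se_2000                     # +0.25 D: a hyperopic shift
```

A positive 5-year change corresponds to the hyperopic shift reported for the younger age groups; a negative change corresponds to the myopic shift seen with nuclear cataract.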
Abstract:
Each year, hospitalized patients experience 1.5 million preventable injuries from medication errors, and hospitals incur an additional $3.5 billion in cost (Aspden, Wolcott, Bootman, & Cronenwett, 2007). Error reporting is believed to be one way to learn about the factors contributing to medication errors. And yet an estimated 50% of medication errors go unreported. This period of medication error pre-reporting is, with few exceptions, underexplored. The literature focuses on error prevention and management but lacks a description of the period of introspection and inner struggle over whether to report an error, and of the resulting likelihood of reporting. Reporting makes a nurse vulnerable to reprimand, legal liability, and even threat to licensure. For some nurses this state may create a disparity between their belief about themselves as healers and the undeniable fact of the error.

This study explored the medication error reporting experience. Its purpose was to inform nurses, educators, organizational leaders, and policy-makers about the medication error pre-reporting period, and to contribute to a framework for further investigation. From a better understanding of the factors that increase or decrease an individual's likelihood of reporting an error, interventions can be identified to help the nurse reach a psychologically healthy resolution and to increase the reporting of errors, so that organizations can learn from them and reduce the possibility of similar errors in the future.

The research question was: "What factors contribute to a nurse's likelihood to report an error?" The specific aims of the study were to: (1) describe participant nurses' perceptions of medication error reporting; (2) describe participants' accounts of the emotional, cognitive, and physical reactions to making a medication error; (3) identify pre-reporting conditions that make a nurse less likely to report a medication error; and (4) identify pre-reporting conditions that make a nurse more likely to report one.

A qualitative study was conducted to explore the medication error experience, and in particular the pre-reporting period, from the perspective of the nurse. A total of 54 registered nurses from a large private free-standing not-for-profit children's hospital in the southwestern United States participated in group interviews. The results describe the experience of the nurse as well as the physical, emotional, and cognitive responses to the realization of having committed a medication error, and reveal the factors that make reporting more and less likely. It is clear from this study that upon realizing that he or she has made a medication error, a nurse's foremost concern is the safety of the patient. Fear was also described by each group of nurses: fear of physician, manager, peer, and family reactions, and of a possible resulting loss of trust. Another universal response was a struggle with guilt, shame, imperfection, self-blame, and questioning of one's own competence.
Abstract:
In regression analysis, covariate measurement error occurs in many applications; the error-prone covariates are often referred to as latent variables. In this study, we extended the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We present an approach that applies the Monte Carlo method within the Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study shows that the method produces an efficient estimator in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
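As a rough illustration of the underlying problem (not of the authors' Bayesian estimator), the following simulation shows how a naive regression on a noisy surrogate attenuates the latent slope; all variances and the slope value are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_slope = 2.0

x = rng.normal(0.0, 1.0, n)            # latent covariate (unobserved)
w = x + rng.normal(0.0, 1.0, n)        # surrogate: x plus measurement error
y = true_slope * x + rng.normal(0.0, 0.5, n)

# Naive OLS of y on the surrogate w is attenuated toward zero:
# plim(slope) = true_slope * var(x) / (var(x) + var(error)) = 1.0 here
naive_slope = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
```

The larger the measurement error variance, the stronger this attenuation, which is the regime in which the abstract reports the proposed conditional-expectation estimator performing best.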
Abstract:
Objective. In 2009, the International Expert Committee recommended the HbA1c test for the diagnosis of diabetes. Although recommended for diagnosis, its precise test performance among Mexican Americans is uncertain. A strong "gold standard" would rely on repeated blood glucose measurements on different days, the recommended method for diagnosing diabetes in clinical practice. Our objective was to assess the performance of HbA1c in detecting diabetes and pre-diabetes against repeated fasting blood glucose measurements in the Mexican American population living along the United States-Mexico border. Moreover, we sought a specific and precise HbA1c threshold for diabetes mellitus (DM) and pre-diabetes in this high-risk population, which might assist in better diagnosis and better management.

Research design and methods. We used the CCHC dataset. In 2004, the Cameron County Hispanic Cohort (CCHC), now numbering 2,574, was established, drawn from randomly selected households on the basis of 2000 Census tract data. The CCHC study randomly selected a subset of people (aged 18-64 years) in cohort households to determine the influence of SES on diabetes and obesity. Among the participants in Cohort-2000, 67.15% are female; all are Hispanic. Individuals were defined as having diabetes mellitus (fasting plasma glucose [FPG] ≥ 126 mg/dL) or pre-diabetes (100 ≤ FPG < 126 mg/dL). HbA1c test performance was evaluated using receiver operating characteristic (ROC) curves, and change-point models were used to determine HbA1c thresholds compatible with the FPG thresholds for diabetes and pre-diabetes.

Results. When FPG was used to detect diabetes, the sensitivity and specificity of HbA1c ≥ 6.5% were 75% and 87%, respectively (area under the curve 0.895). When FPG was used to detect pre-diabetes, the sensitivity and specificity of HbA1c ≥ 6.0% (the ADA-recommended threshold) were 18% and 90%, respectively; for HbA1c ≥ 5.7% (the International Expert Committee recommended threshold) they were 31% and 78%. The ROC analyses suggest HbA1c is a sound predictor of diabetes mellitus (area under the curve 0.895) but a poorer predictor of pre-diabetes (area under the curve 0.632).

Conclusions. Our data support the current recommendations for the use of HbA1c in the diagnosis of diabetes in the Mexican American population, as it showed reasonable sensitivity, specificity, and accuracy against repeated FPG measures. However, using HbA1c to detect pre-diabetes may be premature in this population because of its poor sensitivity relative to FPG. It may be that HbA1c more effectively identifies those at risk of developing diabetes; following these pre-diabetic individuals over the longer term for incident diabetes may yield more conclusive results.
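The sensitivity/specificity calculation behind these results can be sketched as follows. This is a minimal illustration with made-up toy data, not the CCHC dataset, using the FPG ≥ 126 mg/dL and HbA1c ≥ 6.5% cutoffs quoted above.

```python
def sensitivity_specificity(hba1c, fpg, hba1c_cut=6.5, fpg_cut=126.0):
    """Sensitivity/specificity of an HbA1c cutoff against an FPG-defined
    diabetes 'gold standard'."""
    tp = fp = tn = fn = 0
    for a1c, glucose in zip(hba1c, fpg):
        diseased = glucose >= fpg_cut   # FPG-defined diabetes
        positive = a1c >= hba1c_cut     # HbA1c test result
        if diseased and positive:
            tp += 1
        elif diseased:
            fn += 1
        elif positive:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: six subjects
sens, spec = sensitivity_specificity(
    hba1c=[7.1, 6.7, 5.4, 6.6, 5.2, 5.9],
    fpg=[150, 130, 98, 110, 90, 135])
```

Sweeping `hba1c_cut` over a grid and plotting sensitivity against 1 − specificity is what produces the ROC curves whose areas are reported above.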
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method widely used in the biological and biomedical fields. Current methods for qPCR data analysis, including the threshold cycle (CT) method and linear and non-linear model-fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles we subtracted the fluorescence in the earlier cycle from that in the later cycle, transforming n cycles of raw data into n-1 cycles of differenced data. Linear regression was then applied to the natural logarithm of the transformed data, and the amplification efficiency and initial number of DNA molecules were calculated for each PCR run. To evaluate the new method, we compared its accuracy and precision against the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria (threshold identification, max R², and max slope) were employed to select the target data points, and because PCR data are time series, we also applied linear mixed models. When the threshold identification criterion was applied and the linear mixed model was adopted, the taking-difference linear regression method was superior, giving an accurate estimate of the initial DNA amount and a reasonable estimate of the PCR amplification efficiency. When the max R² and max slope criteria were used, the original linear regression method gave an accurate estimate of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error of subtracting an unknown background and is thus theoretically more accurate and reliable. It is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
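The taking-difference steps described above can be sketched in a few lines. This is a simplified reconstruction on noise-free synthetic exponential-phase data; the real method also includes the data-point selection criteria discussed in the abstract.

```python
import math

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression for qPCR, as described above.
    fluorescence[n] is the raw reading at cycle n. Differencing consecutive
    cycles cancels any constant background B:
        F[n] = F0 * E**n + B  =>  F[n+1] - F[n] = F0 * (E - 1) * E**n
    so ln(diff) is linear in n with slope ln(E)."""
    diffs = [b - a for a, b in zip(fluorescence, fluorescence[1:])]
    xs = list(range(len(diffs)))
    ys = [math.log(d) for d in diffs]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    E = math.exp(slope)                   # amplification efficiency
    F0 = math.exp(intercept) / (E - 1)    # initial signal (∝ initial DNA)
    return E, F0

# Synthetic run: E = 1.9, F0 = 0.01, constant background 5.0
data = [5.0 + 0.01 * 1.9 ** n for n in range(10)]
E, F0 = taking_difference_fit(data)       # recovers E ≈ 1.9, F0 ≈ 0.01
```

Note that the unknown background 5.0 never has to be estimated, which is exactly the advantage the abstract claims over background-subtraction methods.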
Abstract:
The soybean aphid has been a major pest for producers in Northwest Iowa since its first major outbreak in 2003. Control measures for this pest are warranted almost every growing season, and much research is being done on its management. Insecticide applications have been the sole management technique for soybean aphid and will continue to be important in the future. An economic threshold of 250 aphids/plant is the level currently recommended by Iowa State University. This study was conducted to determine whether the current recommendations are useful in managing soybean aphid and maintaining profitability for producers.
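The threshold decision rule can be sketched as follows; averaging counts across scouted plants is an assumption of this illustration, not a detail from the study.

```python
ECONOMIC_THRESHOLD = 250  # aphids per plant (Iowa State University recommendation)

def treatment_warranted(aphid_counts):
    """True when the mean count over scouted plants reaches the threshold."""
    return sum(aphid_counts) / len(aphid_counts) >= ECONOMIC_THRESHOLD

decision = treatment_warranted([180, 320, 410, 220])  # True (mean 282.5)
```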
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Independently observed mean dynamic topography data are therefore valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only is the mean dynamic topography required on the particular ocean model grid, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach that combines these two different observation groups in a consistent way without the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). This combination method considers the full spectral range of the observations and allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure a rigorous variance propagation is performed, so that, for the first time, the full covariance matrix of the mean sea surface is available. Combining the mean profile with a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography, with its full error structure, in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
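The variance-propagation step can be illustrated with a toy linear model, not the authors' actual processing chain: if a mean profile m is a linear function m = A·d of along-track heights d with covariance C_d, then C_m = A·C_d·Aᵀ, and the normal-equation matrix is its inverse. The averaging operator and observation covariance below are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy along-track heights: 5 repeat passes over 3 grid points,
# ordered pass-major, with an assumed diagonal observation covariance
d = rng.normal(size=15)
C_d = 0.01 * np.eye(15)

# Averaging operator: each grid point is the mean over its 5 passes
A = np.kron(np.ones((1, 5)) / 5.0, np.eye(3))   # shape (3, 15)

m = A @ d                    # mean sea surface profile
C_m = A @ C_d @ A.T          # full covariance via variance propagation
N = np.linalg.inv(C_m)       # normal-equation matrix for assimilation
```

With independent passes the propagated variance drops by the factor 5, i.e. C_m = 0.002·I here; carrying the full C_m (rather than a diagonal approximation) is what the "full error structure" in the abstract refers to.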
Abstract:
Affiliation: Fornero, Ricardo A. Universidad Nacional de Cuyo. Facultad de Ciencias Económicas
Abstract:
We present a new mid-latitude speleothem record of millennial-scale climatic variability during OIS3 from the Villars Cave that, combined with previously published contemporaneous samples from the same cave, gives a coherent picture of climate variability in SW France between ~55 ka and ~30 ka. The 0.82 m long stalagmite Vil-stm27 was dated with 26 TIMS U-Th analyses, and its growth curve displays variations that are linked with the stable isotopes, both controlled by climatic conditions. It constitutes a more highly resolved replicate of the previously published Vil-stm9 and Vil-stm14 stalagmites, in which Dansgaard-Oeschger (DO) events have been observed. The good consistency between these three stalagmites, and comparison with other palaeoclimatic reconstructions, especially high-resolution pollen records (ODP 976 from the Alboran Sea, the Monticchio Lake record from southern Italy) and the nearby MD04-2845 Atlantic Ocean record, make it possible to draw a specific climatic pattern for SW France during OIS3 and to see regional differences between these sites. The main features of this period are: 1) warm events corresponding to Greenland Interstadials (GIS), characterized by low speleothem δ13C, high temperate pollen percentages, warm temperatures, and high humidity; among these events, GIS#12 is the most pronounced at Villars, with an abrupt onset at ~46.6 ka and a duration of about 2.5 ka. The other well-individualized warm event coincides with GIS#8, which is much less pronounced and occurred during a cooler period, as shown by a lower growth rate and a higher δ13C; 2) cold events corresponding to Greenland Stadials (GS), clearly characterized by high speleothem δ13C, low temperate pollen abundance, low temperatures, and enhanced dryness, particularly well expressed during the GS coinciding with Heinrich events H5 and H4.

The main feature of the Villars record is a general cooling trend between the DO#12 event at ~45.5 ka and the synchronous stop of the three stalagmites at ~30 ± 1 ka, with a first well-marked climatic threshold at ~41 ka, after which the growth rate and diameter of all the stalagmites decrease significantly. This climatic evolution differs from that shown at southern Mediterranean sites, where this trend is not observed. The ~30 ka age marks a second climatic threshold, after which low temperatures and low rainfall prevented speleothem growth in the Villars area until the Lateglacial warming at ~16.5 ± 0.5 ka. This 15 ka long hiatus, like the older Villars growth hiatus between 67.4 and 61 ka, is linked to low sea levels, reduced ocean circulation, and a southward shift of the Polar Front that likely provoked local permafrost formation. These cold periods coincide with low summer 65°N insolation, low atmospheric CO2 concentrations, and the development of large ice sheets (especially the Fennoscandian).
Abstract:
This paper continues an earlier one in which we analyzed the description of the New World and the workings of analogy, drawing on critical studies of the Diaries of Christopher Columbus's First Voyage. Here we examine the difficulty of distinguishing Columbus's discourse in his Diaries from the discourse of Las Casas. To that end, this paper studies Las Casas's interventions in Columbus's diary in light of their possible inclusion in the episteme of representation described by Michel Foucault in Las palabras y las cosas (The Order of Things), where he argues that in each cultural moment only one episteme grants the conditions of possibility of all knowledge, conditions that change with each new general arrangement of knowledge, or episteme. Our work consists in establishing epistemological differences between the Columbian discourse, as found in the diary, and Las Casas's interpolated discourse (in the same text). From this perspective, the textual dialogue between the discourses of Columbus and Las Casas can be considered in terms of what makes each possible, that is, profoundly different (epistemological) configurations of knowledge.
Abstract:
The "canonical" history of science is an anachronistic narrative riddled with deep dichotomies, overemphasizing successes (discoveries, findings, triumphant theoretical models, milestones) and dismissing failures. In real science there is constant discussion, debate, and controversy, fueled by the internal dynamics of disciplinary communities. In science teaching, the analysis of "error" can be far more interesting as a construct of the evolution of knowledge than its mere flagging as a demarcation of successful theories. The study of fraud is equally valuable. Because scientific activity depends heavily on publication, it is conditioned by discourse, and skillful manipulation of that discourse can at times make artifice, bias, and deception especially hard to identify. The approach known as "nature of science" allows us to use these elements to understand the intricate inner workings of the scientific ethos and to convey to students the controversial dimensions of science as a social activity. Science teaching can profit greatly from these devices, which allow second readings of historical events. We bring up two scientific episodes from the early twentieth century to examine the complex relationships that a simple label of fraud or error would obscure. We also highlight the near-total absence of these matters from commonly used school textbooks, and we offer suggestions for including them in teaching materials with a more up-to-date epistemological approach, one that reveals the context and tensions to which the construction of knowledge is subject.
Abstract:
This paper returns to vv. 358-361 of the Cantar de Mio Cid, on a point that has troubled critics: the text preserved in the Códice de Vivar states that Jesus was resurrected first and then descended into Hell, which inverts the traditional order of events. Accordingly, we review the various opinions on the matter, which broadly fall into two groups: those holding that the poet made an error, and those claiming that the author of the poem followed a particular model, drawn either from French epic or from the liturgy. We then attempt to arrive at a solution that more satisfactorily takes into account the specificity of the manuscript text.