914 results for Error Probability
Abstract:
Each year, hospitalized patients experience 1.5 million preventable injuries from medication errors, and hospitals incur an additional $3.5 billion in costs (Aspden, Wolcott, Bootman, & Cronenwett, 2007). Error reporting is believed to be one way to learn about the factors contributing to medication errors, and yet an estimated 50% of medication errors go unreported. This period of medication error pre-reporting is, with few exceptions, underexplored. The literature focuses on error prevention and management but lacks a description of the period of introspection and inner struggle over whether to report an error and the resulting likelihood of reporting. Reporting makes a nurse vulnerable to reprimand, legal liability, and even a threat to licensure. For some nurses this state may give rise to a disparity between a person's belief about him or herself as a healer and the undeniable fact of the error.

This study explored the medication error reporting experience. Its purpose was to inform nurses, educators, organizational leaders, and policy-makers about the medication error pre-reporting period, and to contribute to a framework for further investigation. From a better understanding of the factors that contribute to or detract from an individual's likelihood of reporting an error, interventions can be identified to help the nurse reach a psychologically healthy resolution and to increase error reporting, so that errors can be learned from and the possibility of similar future errors reduced.

The research question was: "What factors contribute to a nurse's likelihood to report an error?" The specific aims of the study were to: (1) describe participant nurses' perceptions of medication error reporting; (2) describe participant explanations of the emotional, cognitive, and physical reactions to making a medication error; (3) identify pre-reporting conditions that make it less likely for a nurse to report a medication error; and (4) identify pre-reporting conditions that make it more likely for a nurse to report a medication error.

A qualitative research study was conducted to explore the medication error experience, and in particular the pre-reporting period, from the perspective of the nurse. A total of 54 registered nurses from a large, private, free-standing, not-for-profit children's hospital in the southwestern United States participated in group interviews. The results describe the experience of the nurse as well as the physical, emotional, and cognitive responses to the realization of having committed a medication error. The results also reveal factors that make it more or less likely that a nurse will report a medication error.

It is clear from this study that upon realizing that he or she has made a medication error, a nurse's foremost concern is for the safety of the patient. Fear was also described by each group of nurses, including fear of physician, manager, peer, and family reactions, and of a resulting loss of trust. Another universal response was the description of a struggle with guilt, shame, imperfection, self-blame, and questioning of one's own competence.
Abstract:
In regression analysis, covariate measurement error occurs in many applications. The error-prone covariates are often referred to as latent variables. In this study, we extended the approach of Chan et al. (2008) for recovering the latent slope in a simple regression model to the multiple regression setting. We presented an approach that applies the Monte Carlo method in a Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study showed that the method produces an efficient estimator in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
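The abstract does not give the estimator's formulas, so the following is only a minimal sketch of the underlying errors-in-variables problem it addresses: one covariate is observed through a noisy surrogate, the naive slope is attenuated toward zero, and a simple regression-calibration-style correction (not the dissertation's Bayesian conditional-expectation estimator) recovers the latent slope. All variable names and parameter values are illustrative assumptions.

```python
# Minimal sketch of the errors-in-variables problem (illustrative only; this is
# a regression-calibration-style correction, not the dissertation's Bayesian
# conditional-expectation estimator). One covariate X is latent and observed
# only through the surrogate W = X + U.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
beta0, beta_x, beta_z = 1.0, 2.0, -1.5       # true intercept and slopes (assumed values)
sigma_x, sigma_u, sigma_e = 1.0, 0.8, 1.0    # sd of latent X, measurement error U, outcome noise

x = rng.normal(0.0, sigma_x, n)              # latent covariate (unobserved in practice)
z = rng.normal(0.0, 1.0, n)                  # error-free covariate
w = x + rng.normal(0.0, sigma_u, n)          # observed surrogate
y = beta0 + beta_x * x + beta_z * z + rng.normal(0.0, sigma_e, n)

# Naive fit: regress y on (1, W, Z); the slope on W is attenuated toward zero.
naive = np.linalg.lstsq(np.column_stack([np.ones(n), w, z]), y, rcond=None)[0]

# Correction: replace W by E[X | W] = lambda * W with reliability
# lambda = var(X) / (var(X) + var(U)), taken as known here for simplicity.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
corrected = np.linalg.lstsq(np.column_stack([np.ones(n), lam * w, z]), y, rcond=None)[0]

print(f"true latent slope      = {beta_x}")
print(f"naive slope on W       = {naive[1]:.3f}")
print(f"corrected latent slope = {corrected[1]:.3f}")
```

Increasing sigma_u makes the attenuation of the naive slope more severe, which is the large-measurement-error regime the abstract highlights.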
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always yield an appreciable increase in the response rate; this may be especially true at high doses of a biologic agent. In a phase II trial, investigators may therefore be interested in testing the anti-tumor activity of a drug at more than one dose (often two), instead of only at the maximum tolerated dose (MTD). If the lower dose appears equally effective, it can then be recommended for further confirmatory testing in a phase III trial, given potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge that the response probabilities at the different doses are ordered. Failure to account for this ordering constraint when estimating the response probabilities may, however, result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses with ordered response rates.

Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint.

Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor.

Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). The proposed designs thus lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. They may also be extended to designs for multiple-group trials and drug-combination trials.
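As background for the operating characteristics discussed above, here is a minimal sketch of how the probability of early termination, expected sample size, type I error, and power of a standard single-dose Simon two-stage design (the baseline the dissertation extends) follow from binomial probabilities. The design constants used (r1/n1 = 1/10, r/n = 5/29 for p0 = 0.10, p1 = 0.30) are commonly quoted values and should be treated as illustrative; this is not the proposed two-dose isotonic or Bayesian design.

```python
# Minimal sketch: operating characteristics of a single-dose Simon two-stage
# design, the baseline that the proposed two-dose designs extend. The design
# constants below are commonly quoted for p0 = 0.10, p1 = 0.30 and are used
# here purely for illustration.
from scipy.stats import binom

def simon_oc(p, n1, r1, n, r):
    """Operating characteristics at true response probability p:
    PET  = probability of early termination after stage 1 (<= r1 responses),
    EN   = expected sample size,
    prej = probability of rejecting H0 (declaring the drug promising)."""
    pet = binom.cdf(r1, n1, p)
    en = n1 + (1 - pet) * (n - n1)
    prej = sum(
        binom.pmf(x1, n1, p) * (1 - binom.cdf(r - x1, n - n1, p))
        for x1 in range(r1 + 1, n1 + 1)
    )
    return pet, en, prej

p0, p1 = 0.10, 0.30
n1, r1, n, r = 10, 1, 29, 5

pet0, en0, alpha = simon_oc(p0, n1, r1, n, r)
_, _, power = simon_oc(p1, n1, r1, n, r)
print(f"type I error = {alpha:.3f}, power = {power:.3f}")
print(f"PET under p0 = {pet0:.3f}, expected N under p0 = {en0:.1f}")
```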
Abstract:
Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect on the overall result of individual investigational sites that enroll small numbers of patients. Can the presence of small centers cause an ineffective treatment to appear effective when the treatment-by-center interaction is not statistically significant?

In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of these 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170, and 300. The simulated data are generated under two sets of conditions: one in which the treatment is effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions.

Standard analysis-of-variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model includes treatment effect, center effect, and treatment-by-center interaction; another model includes treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error.

We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
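To make the simulation setup concrete, the following is a minimal, illustrative sketch (not the dissertation's actual simulation code or parameter choices) of one simulated multi-center trial with several small centers, analyzed with a two-way ANOVA that includes treatment, center, and treatment-by-center interaction. Center labels, effect sizes, and per-center sample sizes are assumptions chosen only for demonstration.

```python
# Illustrative sketch of one simulated multi-center trial with small centers
# (not the dissertation's actual simulation code; all parameter values are
# assumptions chosen for demonstration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# 8 larger centers (10 patients per arm) and 12 small centers (3 per arm).
centers = [(f"L{i}", 10) for i in range(8)] + [(f"S{i}", 3) for i in range(12)]
treatment_effect = 0.0                      # null case: treatment not effective

rows = []
for center, n_per_arm in centers:
    center_effect = rng.normal(0.0, 0.5)    # random center-to-center shift
    for treat in (0, 1):
        y = rng.normal(center_effect + treatment_effect * treat, 1.0, n_per_arm)
        rows += [{"center": center, "treat": treat, "y": v} for v in y]

df = pd.DataFrame(rows)

# Two-way ANOVA with treatment, center, and treatment-by-center interaction.
model = smf.ols("y ~ C(treat) * C(center)", data=df).fit()
print(anova_lm(model, typ=2))
```

Repeating this loop many times under the null treatment effect and counting how often the treatment or interaction term is significant is the standard way to estimate type I error and interaction power by simulation.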
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, i.e., the difference between the mean sea surface and the geoid. Independently observed mean dynamic topography data are therefore valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach that combines these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Furthermore, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps applied to along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. Combining the mean profile with a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with its full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
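As a toy illustration of the variance propagation mentioned above (not the paper's actual processing chain), the sketch below forms the mean dynamic topography as the difference between a mean sea surface and a geoid on a few grid nodes, adds the two error covariances under an independence assumption, and inverts the result to obtain the normal-equation matrix required for data assimilation. All numbers are invented toy values.

```python
# Toy illustration of the variance propagation described above (not the paper's
# processing chain): MDT = MSS - geoid, covariances add under an independence
# assumption, and the normal-equation matrix is the inverse of the result.
import numpy as np

mss = np.array([1.20, 1.35, 1.10])           # mean sea surface heights [m]
geoid = np.array([0.90, 1.00, 0.85])         # geoid heights on the same nodes [m]
C_mss = np.diag([0.02, 0.02, 0.03]) ** 2     # toy MSS error covariance [m^2]
C_geoid = np.diag([0.03, 0.04, 0.03]) ** 2   # toy geoid error covariance [m^2]

mdt = mss - geoid                            # mean dynamic topography [m]
C_mdt = C_mss + C_geoid                      # propagated error covariance
N = np.linalg.inv(C_mdt)                     # normal-equation (inverse covariance) matrix

print("MDT [m]:", mdt)
print("normal-equation diagonal:", np.diag(N).round(1))
```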
Abstract:
Four models of fission track annealing in apatite are compared with measured fission track lengths in samples from Site 800 in the East Mariana Basin, Ocean Drilling Program Leg 129, given an independently determined temperature history. The temperature history of Site 800 was calculated using a one-dimensional, compactive, conductive heat flow model assuming two end-member thermal cases: one for cooling of Jurassic ocean crust that has experienced no subsequent heating, and one for cooling of Cretaceous ocean crust. Because the samples analyzed were only shallowly buried and because the tectonic history of the area since sample deposition is simple, resolution of the temperature history is high. The maximum temperature experienced by the sampled bed is between 16° and 21°C and occurs at 96 Ma; temperatures since the Cretaceous have dropped in spite of continued pelagic sediment deposition because heat flow has continued to decay exponentially and bottom-water temperatures have fallen. Fission tracks observed within apatite grains from the sampled bed are 14.6 +/- 0.1 µm (1 sigma) long. Given the proposed temperature history of the samples, one unpublished and three published models of fission track annealing predict mean track lengths from 14.8 to 15.9 µm. These models require temperatures as much as 40°C higher than the calculated paleotemperature maximum of the sampled bed to produce the same degree of track annealing. Measured and predicted values differ because the annealing models are based on extrapolation of high-temperature laboratory data to geologic times. The model that makes the closest prediction is based on the largest number of experiments performed at low temperature and on an apatite whose composition is closest to that of the core samples.
Abstract:
To reduce the costs and labor associated with predicting the genotypic mean (GM) of a synthetic variety (SV) of maize (Zea mays L.), breeders can develop SVs from L lines and s single crosses (SynL,SC) instead of from L+2s lines (SynL). The objective of this work was to derive and study formulae for the inbreeding coefficient (IC) and GM of SynL,SC, SynL, and the SV derived from (L+2s)/2 single crosses (SynSC). All SVs were derived from the same L+2s unrelated lines, whose IC is FL, and each parent of an SV was represented by m plants. An a priori probability equation for the IC was used. The important results were: (1) the largest and smallest GMs correspond to SynL and SynL,SC, respectively; (2) the GM predictors with the largest and intermediate precision are those for SynL and SynL,SC, respectively; (3) only when FL=1, or when m is large, are SynL and SynSC the same population; however, only with SynSC do prediction costs and labor undergo the maximum decrease, although its prediction precision is the lowest. In deciding which SV to develop, breeders should also consider the availability of lines, single crosses, manpower, and land area, as well as budget, target farmers, target environments, etc.
Abstract:
Affiliation: Fornero, Ricardo A. Universidad Nacional de Cuyo, Facultad de Ciencias Económicas.
Abstract:
Much progress has been made in estimating recurrence intervals of great and giant subduction earthquakes using terrestrial, lacustrine, and marine paleoseismic archives. Recent detailed records suggest these earthquakes may have variable recurrence periods and magnitudes, forming supercycles. Understanding seismic supercycles requires long paleoseismic archives that record the timing and magnitude of such events. Turbidite paleoseismic archives may potentially extend past earthquake records to the Pleistocene and can thus complement the commonly shorter-term terrestrial archives. However, in order to unambiguously establish recurring seismicity as a trigger mechanism for turbidity currents, synchronous deposition of turbidites in widely spaced, isolated depocenters has to be ascertained. Furthermore, the characteristics that predispose a seismically active continental margin to turbidite paleoseismology, and the correct selection of sample sites, have to be taken into account. Here we analyze 8 marine sediment cores along 950 km of the Chile margin to test the feasibility of compiling detailed and continuous turbidite-based paleoseismic records. Our results suggest that the deposition of areally widespread, synchronous turbidites triggered by seismicity is largely controlled by sediment supply and, hence, by the climatic and geomorphic conditions of the adjacent subaerial setting. The feasibility of compiling a turbidite paleoseismic record depends on the delicate balance between sediment supply sufficient to provide material that fails frequently during seismic shaking and sedimentation rates low enough to allow coeval accumulation of planktonic foraminifera for high-resolution radiocarbon dating. We conclude that offshore northern central Chile (29-32.5°S) Holocene turbidite paleoseismology is not feasible, because sediment supply from the semi-arid mainland is low and almost no Holocene turbidity-current deposits are found in the cores. In contrast, in the humid region between 36 and 38°S, frequent Holocene turbidite deposition may generally correspond to paleoseismic events; however, high terrigenous sedimentation rates prevent high-resolution radiocarbon dating. The climatic transition region between 32.5 and 36°S appears to be best suited for turbidite paleoseismology.
Abstract:
This paper continues an earlier one in which we analyzed the description of the New World and the workings of analogy, drawing on critical studies of the Diaries of Christopher Columbus's First Voyage. Here we examine the difficulty of distinguishing Columbus's discourse in his Diaries from the discourse of Las Casas. To that end, this paper studies Las Casas's interventions in Columbus's diary in light of their possible inclusion within the episteme of representation described by Michel Foucault in Las palabras y las cosas (The Order of Things), where he argues that at each cultural moment only one episteme grants the conditions of possibility of all knowledge, conditions that change with each new general arrangement of knowledge, or episteme. Our work consists in establishing epistemological differences between the Columbian discourse found in the diary and Las Casas's interpolated discourse (in the same text). From this perspective, the textual dialogue between the discourses of Columbus and Las Casas can then be considered in terms of what makes each of them possible, that is, in terms of profoundly different configurations of knowledge (epistemes).
Abstract:
The "canonical" history of science is an anachronistic narrative riddled with deep dichotomies, overemphasizing successes (discoveries, findings, triumphant theoretical models, milestones) and dismissing failures. In real science there is constant discussion, debate, and controversy, fueled by the internal dynamics of disciplinary communities. In science teaching, the analysis of "error" can be far more interesting as a construct of the evolution of knowledge than its mere labeling as a demarcation of successful theories. The study of fraud is equally valuable. Because scientific activity depends heavily on publication, it is conditioned by discourse. Skillful manipulation of this discourse can at times make the artifice, the bias, or the deception especially difficult to identify. The approach known as "nature of science" allows us to draw on these elements to understand the intricate inner workings of the scientific ethos and to convey to students the controversial dimensions of science as a social activity. Science teaching can benefit greatly from these devices, which allow second readings of historical events. We bring up two scientific episodes from the early twentieth century to examine the complex relationships that a simple label of fraud or error would prevent us from seeing. We also highlight the near-total absence of these issues from commonly used school textbooks. We offer suggestions for including these topics in teaching materials with a more up-to-date epistemological approach, one that reveals the context and the tensions to which the construction of knowledge is subject.
Abstract:
This paper returns to vv. 358-361 of the Cantar de Mio Cid, addressing an issue that has troubled critics: the text preserved in the Codex of Vivar states that Jesus rose first and then descended into Hell, which inverts the traditional order of events. The various opinions on the matter are therefore reviewed here; broadly, they fall into two groups, those holding that the poet made an error and those claiming that the author of the poem followed a particular model drawn either from French epic or from the liturgy. The paper then attempts to arrive at a solution that more satisfactorily takes into account the specificity of the manuscript text.
Abstract:
The aim of this article is to present the current debate in Brazilian society over the notion of error in the teaching of Portuguese. The normative conception of language as an abstract, formal grammatical structure, illustrated with decontextualized examples or examples drawn from the literary classics, came into conflict, from the second half of the twentieth century onward, with linguistic theories and methodologies that began to study language beyond the abstract, formal system described by traditional grammar. The article also reflects on the importance of addressing this reality with students in Portuguese teacher-training programs: it is essential that future teachers recognize linguistic variation, accept that language teaching is not tied exclusively to traditional grammar, and incorporate the concepts of adequacy and inadequacy when assessing their future students' written and oral production.