973 results for Roundness errors
Abstract:
BACKGROUND: Multiple interventions were made to optimize the medication process in our intensive care unit (ICU): (1) transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; and (4) physicians and nurses were trained in the correct use of the new form. This study was aimed at evaluating the impact of these interventions on clinically significant types of medication errors. METHODS: Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control for secular trends. RESULTS: Over 85 days, 9298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and the three series following the intervention. The global error rate decreased from 4.95% to 2.14% (-56.8%, P < 0.001). CONCLUSIONS: The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to optimizing the prescription-writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of transcription, combined with user training, contributed to reducing errors and showed considerable potential to increase safety.
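The headline figure follows directly from the two reported rates; as a quick check:

```latex
\frac{4.95 - 2.14}{4.95} \approx 0.568
\quad\Rightarrow\quad \text{a } 56.8\% \text{ relative reduction in the global error rate.}
```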
Abstract:
Expectations about the future are central to the determination of current macroeconomic outcomes and to the formulation of monetary policy. Recent literature has explored ways of supplementing the benchmark of rational expectations with explicit models of expectations formation that rely on econometric learning. Some apparently natural policy rules turn out to imply expectational instability of private agents' learning. We use the standard New Keynesian model to illustrate this problem and survey the key results on interest-rate rules that deliver both uniqueness and stability of equilibrium under econometric learning. We then consider some practical concerns, such as measurement errors in private expectations, the observability of variables, and the learning of structural parameters required for policy. We also discuss some recent applications, including policy design under perpetual learning, estimated models with learning, recurrent hyperinflations, and macroeconomic policy to combat liquidity traps and deflation.
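As a minimal sketch of the econometric-learning benchmark referred to here (standard notation from the adaptive-learning literature, not reproduced from this abstract): agents estimate the parameters of their perceived law of motion by recursive least squares, and a rational-expectations equilibrium is learnable when the associated E-stability condition holds.

```latex
% Perceived law of motion and its recursive least-squares update:
y_t = \phi_{t-1}' z_{t-1} + \varepsilon_t, \qquad
\phi_t = \phi_{t-1} + t^{-1} R_t^{-1} z_{t-1}\,\big(y_t - \phi_{t-1}' z_{t-1}\big), \qquad
R_t = R_{t-1} + t^{-1}\big(z_{t-1} z_{t-1}' - R_{t-1}\big).
% E-stability: the REE \bar\phi is stable under learning iff it is a stable
% rest point of d\phi/d\tau = T(\phi) - \phi, where T maps the perceived
% law of motion into the actual law of motion.
```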
Abstract:
This paper contributes to the ongoing empirical debate regarding the role of the RBC model, and in particular of technology shocks, in explaining aggregate fluctuations. To this end we estimate the model's posterior density using Markov chain Monte Carlo (MCMC) methods. Within this framework we extend Ireland's (2001, 2004) hybrid estimation approach to allow for a vector autoregressive moving average (VARMA) process to describe the movements and co-movements of the model's errors not explained by the basic RBC model. The results of marginal likelihood ratio tests reveal that the more general model of the errors significantly improves the model's fit relative to the VAR and AR alternatives. Moreover, despite setting the RBC model a more difficult task under the VARMA specification, our analysis, based on forecast error and spectral decompositions, suggests that the RBC model is still capable of explaining a significant fraction of the observed variation in macroeconomic aggregates in the post-war U.S. economy.
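For concreteness, a hybrid specification of the kind described might let the residual errors u_t follow a VARMA(1,1) process (illustrative notation only; the paper's exact lag orders are not stated in this abstract):

```latex
\text{data}_t = \text{model}_t(\theta) + u_t, \qquad
u_t = A\,u_{t-1} + \varepsilon_t + B\,\varepsilon_{t-1}, \qquad
\varepsilon_t \sim N(0,\Sigma),
```

with the VAR alternative obtained by restricting B = 0 and the AR alternative by additionally restricting A to be diagonal.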
Abstract:
As the complexity of mobile agents' tasks grows, it becomes increasingly important that they do not lose the work they have already done. We need to know at all times that execution is proceeding as expected. This project describes the development of a fault-tolerance component, from its initial conception through to its implementation. We analyse the problem and design a solution. Our component aims to mask the failure of an agent by detecting the failure and then resuming execution from the point where it was interrupted, all while following the design methodology for mobile agents on lightweight platforms.
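A minimal, purely illustrative sketch of the detect-and-recover mechanism described above (heartbeat details omitted; the file name, failure simulation, and step counts are all hypothetical, and no particular agent platform is assumed):

```python
import pickle
import random

CHECKPOINT = "agent.ckpt"   # hypothetical checkpoint file
TOTAL_STEPS = 10

def save_checkpoint(state):
    # Persist the agent's progress so no completed work is ever lost.
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    try:
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return {"step": 0}  # fresh start

def run_agent():
    # The agent checkpoints after every unit of work.
    state = load_checkpoint()
    for step in range(state["step"], TOTAL_STEPS):
        if random.random() < 0.2:
            raise RuntimeError("simulated agent failure")
        # ... perform one unit of the agent's task here ...
        state["step"] = step + 1
        save_checkpoint(state)

def supervise():
    # The fault-tolerance component: detect the failure and mask it by
    # restarting the agent from the last recorded checkpoint.
    while load_checkpoint()["step"] < TOTAL_STEPS:
        try:
            run_agent()
        except RuntimeError:
            continue  # failure detected; recovery resumes from the checkpoint

if __name__ == "__main__":
    supervise()
    print("agent finished all", TOTAL_STEPS, "steps without losing work")
```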
Abstract:
This work presents a system for detecting and classifying binary objects according to their shape. In the first step of the procedure, a filter is applied to extract the object's contour. From the shape points, a BSM descriptor with highly descriptive, universal, and invariant features is obtained. In the second phase, the descriptor information is learned and classified using AdaBoost and Error-Correcting Output Codes. Public databases, in both greyscale and colour, have been used to validate the implementation of the designed system. In addition, the system provides an interactive interface in which different image-processing methods can be applied.
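A hedged sketch of the learning stage described above: an Error-Correcting Output Codes (ECOC) classifier wrapping AdaBoost base learners, applied to precomputed shape descriptors. The BSM descriptor computation itself is not reproduced here; synthetic feature vectors stand in for it, and all sizes are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

rng = np.random.RandomState(0)
n_samples, n_classes, n_features = 500, 5, 64   # e.g. an 8x8 BSM grid, flattened

# Placeholder descriptors: class-dependent so the sketch learns something.
y = rng.randint(n_classes, size=n_samples)
X = rng.randn(n_samples, n_features) + y[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ECOC assigns each class a binary codeword and trains one AdaBoost
# learner per codeword bit; prediction picks the nearest codeword.
clf = OutputCodeClassifier(
    estimator=AdaBoostClassifier(n_estimators=50, random_state=0),
    code_size=2.0, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```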
Abstract:
Spatial econometrics has been criticized by some economists because some model specifications have been driven by data-analytic considerations rather than having a firm foundation in economic theory. In particular this applies to the so-called W matrix, which is integral to the structure of the endogenous and exogenous spatial lags and the spatial error processes that are almost the sine qua non of spatial econometrics. Moreover, it has been suggested that the significance of a spatially lagged dependent variable involving W may be misleading, since it may simply be picking up the effects of omitted spatially dependent variables, incorrectly suggesting the existence of a spillover mechanism. In this paper we review the theoretical and empirical rationale for network dependence and spatial externalities as embodied in spatially lagged variables, arguing that failing to acknowledge their presence at least leads to biased inference, can be a cause of inconsistent estimation, and leads to an incorrect understanding of true causal processes.
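In the standard notation the abstract alludes to, with W the spatial weights matrix (generic textbook form, not the paper's own specification):

```latex
y = \rho W y + X\beta + W X \theta + u, \qquad u = \lambda W u + \varepsilon,
```

where \rho W y is the endogenous spatial lag, W X \theta collects the exogenous spatial lags, and the second equation is the spatial error process. Omitting a relevant W y or W X term pushes the spatial dependence into u, which is the source of the biased inference discussed above.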
Abstract:
‘Modern’ Phillips curve theories predict that inflation is an integrated, or near-integrated, process. However, inflation appears bounded above and below in developed economies and so cannot be ‘truly’ integrated; it is more likely stationary around a shifting mean. If agents believe inflation is integrated, as in the ‘modern’ theories, then they are making systematic errors concerning the statistical process of inflation. An alternative theory of the Phillips curve is developed that is consistent with the ‘true’ statistical process of inflation. It is demonstrated that United States inflation data are consistent with the alternative theory but not with the existing ‘modern’ theories.
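The statistical distinction at issue can be written compactly (standard time-series notation, not taken from the paper itself):

```latex
\text{integrated: } \pi_t = \pi_{t-1} + \varepsilon_t
\qquad \text{vs.} \qquad
\text{stationary around a shifting mean: } \pi_t = \mu_t + \varepsilon_t,
```

where \mu_t is constant within regimes and shifts only occasionally, so \pi_t remains bounded rather than wandering without limit as a unit-root process would.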
Abstract:
In this paper we investigate the ability of a number of different ordered probit models to predict ratings based on firm-specific data on business and financial risks. We investigate models based on momentum, drift and ageing and compare them against alternatives that take into account the initial rating of the firm and its previous actual rating. Using data on US bond issuing firms rated by Fitch over the years 2000 to 2007 we compare the performance of these models in predicting the rating in-sample and out-of-sample using root mean squared errors, Diebold-Mariano tests of forecast performance and contingency tables. We conclude that initial and previous states have a substantial influence on rating prediction.
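As a reminder of the workhorse model being compared here (generic ordered-probit notation; the momentum, drift, ageing and initial/previous-rating terms would enter through the covariate vector x_{it}):

```latex
y_{it}^{*} = x_{it}'\beta + \varepsilon_{it}, \quad \varepsilon_{it} \sim N(0,1), \qquad
R_{it} = j \;\;\text{iff}\;\; c_{j-1} < y_{it}^{*} \le c_j,
```

with cut-points c_0 < c_1 < \dots < c_J mapping the latent creditworthiness index into the discrete rating categories.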
Abstract:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
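A generic VECM of the measurement errors u_t, of the kind the abstract describes (illustrative first-order form; the paper's lag order is not stated here):

```latex
\Delta u_t = \alpha \beta' u_{t-1} + \Gamma_1 \Delta u_{t-1} + \varepsilon_t,
```

where \beta' u_{t-1} captures the common trends in the data not accounted for by the theoretical growth model and \alpha governs the speed of adjustment back towards them.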
Abstract:
Accurate chromosome segregation during mitosis is temporally and spatially coordinated by fidelity-monitoring checkpoint systems. Deficiencies in these checkpoint systems can lead to chromosome segregation errors and aneuploidy, and promote tumorigenesis. Here, we report that the TRAF-interacting protein (TRAIP), a ubiquitously expressed nucleolar E3 ubiquitin ligase important for cellular proliferation, is localized close to mitotic chromosomes. Its knockdown in HeLa cells by RNA interference (RNAi) decreased the time of early mitosis progression from nuclear envelope breakdown (NEB) to anaphase onset and increased the percentages of chromosome alignment defects in metaphase and lagging chromosomes in anaphase compared with those of control cells. The decrease in progression time was corrected by the expression of wild-type but not a ubiquitin-ligase-deficient form of TRAIP. TRAIP-depleted cells bypassed taxol-induced mitotic arrest and displayed significantly reduced kinetochore levels of MAD2 (also known as MAD2L1) but not of other spindle checkpoint proteins in the presence of nocodazole. These results imply that TRAIP regulates the spindle assembly checkpoint, MAD2 abundance at kinetochores and the accurate cellular distribution of chromosomes. The TRAIP ubiquitin ligase activity is functionally required for the spindle assembly checkpoint control.
Abstract:
We study the impact of anticipated fiscal policy changes in a Ramsey economy where agents form long-horizon expectations using adaptive learning. We extend the existing framework by introducing distortionary taxes as well as elastic labour supply, which makes agents' decisions non-predetermined but more realistic. We find that the dynamic responses to anticipated tax changes under learning have oscillatory behaviour that can be interpreted as self-fulfilling waves of optimism and pessimism emerging from systematic forecast errors. Moreover, we demonstrate that these waves can have important implications for the welfare consequences of fiscal reforms. (JEL: E32, E62, D84)
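The systematic forecast errors driving these waves arise because beliefs are updated by an adaptive rule of the broad type below (generic constant-gain notation, not the paper's exact specification):

```latex
\phi_t = \phi_{t-1} + \gamma\, R_t^{-1} z_{t-1}\,\big(y_t - \phi_{t-1}' z_{t-1}\big),
```

so beliefs \phi_t chase realized outcomes with gain \gamma, over- and under-shooting after an anticipated tax change and generating the oscillatory, self-fulfilling dynamics described above.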
Abstract:
Cognitive and affective deficits and biases have been a growing source of interest in the neuroscience of mental disorders. This project, which began in 2004 and ended in late 2008, studied the following mental disorders: pathological gambling (PG), eating disorders (ED) and depressive disorders. In this report we focus on summarizing part of the results obtained in a study on PG and decision-making (article under review and pending acceptance) and another on executive functioning in PG and bulimia nervosa (BN) (article in press). Summarizing the first study, PG patients (N=32) show a decision-making process biased by reward-seeking, in the form of elevated risk-taking compared with healthy controls (HC). Deficits in cognitive flexibility, but not in inhibitory control, are also observed between PG and HC. The results rule out behavioural myopia in decision-making in PG, but point to a cognitive-affective bias, in which impulse control would play a relevant role in the form of an illusion of control, in decision-making processes with immediate reward but delayed punishment, as measured by a decision-making task (IGT ABCD). In the second study, based on the shared vulnerabilities described between PG and BN, the executive functioning of women with PG and BN was compared. After administering the WCST and Stroop tasks, and adjusting the analysis for age and education, the PG group showed greater impairment, specifically a higher percentage of perseverative errors, a lower level of conceptual responses and a greater number of trials administered, whereas the BN group showed a higher percentage of non-perseverative errors. Both PG and BN women showed executive dysfunction relative to HC, but with different specific correlates.
Abstract:
Empirical researchers interested in how governance shapes various aspects of economic development frequently use the Worldwide Governance Indicators (WGI). These variables come in the form of an estimate along with a standard error reflecting the uncertainty of that estimate. Existing empirical work simply uses the estimates as an explanatory variable and discards the information provided by the standard errors. In this paper, we argue that the appropriate practice is to take into account the uncertainty around the WGI estimates through the use of multiple imputation. We investigate the importance of our proposed approach by revisiting, in three applications, the results of recently published studies. These applications cover the impact of governance on (i) capital flows; (ii) international trade; and (iii) income levels around the world. We generally find that the estimated effects of governance are highly sensitive to the use of multiple imputation. We also show that model misspecification is a concern for the results of our reference studies. We conclude that the effects of governance are hard to establish once we take into account uncertainty around both the WGI estimates and the correct model specification.
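A hedged sketch of the multiple-imputation approach described above: each WGI point estimate and its standard error define a normal distribution from which M plausible governance scores are drawn; the regression is run once per draw and the M results are pooled with Rubin's rules. The variable names and simulated data below are illustrative only, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 200, 50
wgi_est = rng.normal(size=n)             # reported WGI point estimates
wgi_se = np.full(n, 0.3)                 # reported WGI standard errors
y = 0.5 * wgi_est + rng.normal(size=n)   # outcome, e.g. log income per capita

betas, variances = [], []
for _ in range(M):
    g = rng.normal(wgi_est, wgi_se)      # one imputed set of governance scores
    X = np.column_stack([np.ones(n), g])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                # OLS on this imputed dataset
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)
    betas.append(b[1])
    variances.append(s2 * XtX_inv[1, 1])

# Rubin's rules: total variance = within-imputation + (1 + 1/M) * between.
qbar = np.mean(betas)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(betas, ddof=1)
print(f"pooled beta = {qbar:.3f}, pooled s.e. = {np.sqrt(total_var):.3f}")
```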
Abstract:
Diffusion MRI is a well-established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have recently been proposed to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well-known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ2-norm prior, which promotes a weak version of sparsity. Also, in the last few years compressed sensing has been advocated to further accelerate the acquisitions, and ℓ1-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ1-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ1 inconsistency while simultaneously exploiting sparsity more optimally than through an ℓ2 prior, we reformulate the reconstruction problem as a constrained formulation between a data term and a sparsity prior consisting of an explicit bound on the ℓ0-norm of the FOD, i.e. on the number of fibers. The method has been tested on both synthetic and real data. Experimental results show that the proposed ℓ0 formulation significantly reduces modeling errors compared to the state-of-the-art ℓ2 and ℓ1 regularization approaches.
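The constrained reformulation described can be stated compactly (generic notation: \Phi the deconvolution operator built from the fiber response function, y the measured signal, x the discretized FOD):

```latex
\min_{x \ge 0} \; \| \Phi x - y \|_2^2
\quad \text{subject to} \quad \| x \|_0 \le k,
```

i.e. the data term is minimized over non-negative FODs under an explicit bound k on the number of fiber compartments, instead of adding an ℓ1 penalty that conflicts with the compartments summing to unity.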
Abstract:
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of “legal uncertainty”, relating it to ideas in the literature on Law and Economics, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in the earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard, and penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures, a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
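In the abstract's own terms, an information structure can be summarized by the probability p(a) that a firm taking action a attaches to being deemed anti-competitive if investigated; legal certainty is then the degenerate case. This is a hedged formalization for illustration, not the paper's notation:

```latex
\text{legal certainty:} \;\; p(a) \in \{0,1\} \;\; \forall a, \qquad
\text{legal uncertainty:} \;\; p(a) \in (0,1) \;\; \text{for some } a,
```

which makes clear why decision errors by the authority (Type I/II) are conceptually distinct from uncertainty as perceived by firms at the time of acting: the authority can err even when every p(a) is degenerate, and firms can face non-degenerate p(a) even under an error-free authority.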