866 results for Medication Error
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
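As a rough illustration of the pipeline described above, the sketch below regresses exact-model component scores on proxy scores and reconstructs the exact response of a new realization from its proxy alone. All names, grid sizes and the synthetic proxy/exact curves are invented for the example, and ordinary PCA on discretized curves stands in for a full FPCA basis expansion:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)            # common time grid for all curves
n_learn = 40                         # learning-set size (hypothetical)

# Synthetic learning set: "exact" curves with two modes of variability,
# and proxy curves that are a biased, offset approximation of them.
a1 = rng.uniform(0.5, 1.5, n_learn)
a2 = rng.uniform(-0.3, 0.3, n_learn)
exact = a1[:, None] * np.sin(2 * np.pi * t) + a2[:, None] * np.cos(4 * np.pi * t)
proxy = 0.8 * exact + 0.1

# Reduce both sets of curves to a few principal-component scores.
pca_p, pca_e = PCA(n_components=2), PCA(n_components=2)
S_p = pca_p.fit_transform(proxy)     # proxy scores
S_e = pca_e.fit_transform(exact)     # exact scores

# Error model: regression from proxy scores to exact scores.
err_model = LinearRegression().fit(S_p, S_e)

# Predict the exact response of a new realization from its proxy alone.
new_exact = 1.2 * np.sin(2 * np.pi * t)
new_proxy = 0.8 * new_exact + 0.1
pred = pca_e.inverse_transform(err_model.predict(pca_p.transform(new_proxy[None, :])))[0]
print(np.max(np.abs(pred - new_exact)))   # near-zero residual
```

Because the toy proxy is an affine distortion of the exact curves, the linear score-to-score regression recovers the exact response essentially perfectly; with a real solver pair the residual would quantify the fidelity of the error model.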
Abstract:
This thesis studies the evaluation of software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification, and software process improvement methods. The practical part presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. With the error analysis we were able to show that there were deficiencies in the design and analysis phases, the implementation phase, and the testing phase. The work gives ideas for improving error classification and software development practices.
Abstract:
In the southeastern sector of the Ebro Basin, the paleomagnetic inclination obtained in the Oligocene alluvial successions is considerably lower than expected given the reference paleolatitude calculated for that region during the Oligocene. This inclination error may be due to several factors, such as hydrodynamic control of the magnetic particles in the depositional environment, differential compaction of the sediment during burial, or tectonic deformation. This work focuses on its study in two dominantly alluvial successions whose magnetostratigraphy had previously been established. The alluvial and lacustrine lithofacies studied were grouped into five classes: grey sandstones, red and varicolored sandstones, red silts, red mudstones, and limestones. A correlation between the abundance of phyllosilicates and the inclination error was demonstrated. Thus, lithofacies with a low percentage of phyllosilicates (limestones and grey sandstones) show errors of about 5°, statistically not significant with respect to the reference inclination. In contrast, in materials with a higher percentage of phyllosilicates (silts and clays) the error can reach 25°. This has no bearing on the interpretation of magnetic polarities, but it does affect palinspastic and paleogeographic reconstructions based on paleolatitudes calculated from paleoinclinations. The results obtained demonstrate the need for caution when drawing conclusions based exclusively on this type of information.
Abstract:
Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Many problems then arise, ranging from the prospection of new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterization of the subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then encounter the main limitation of these approaches, which is the computational cost of simulating the complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representative of the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to build an error model, which then serves to correct the remaining approximate responses and predict the response of the complex model. This method maximizes the use of the available information without any perceptible increase in computation time. The uncertainty propagation is then more accurate and more robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the work presented comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the most commonly used algorithms for generating geostatistical realizations consistent with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, two-stage MCMC, was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC.
We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
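The subset-selection step can be sketched as follows; note that KMeans clustering of the proxy responses is used here as a simple stand-in for the distance kernel method, and the synthetic curves and subset size are assumptions for illustration only:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
proxy = rng.normal(size=(200, 50))   # 200 synthetic proxy response curves

# Group similar proxy responses, then keep for each cluster the realization
# closest to the cluster centre (a medoid-style pick): a small subset that
# spans the variability of the full ensemble and receives the exact-model runs.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(proxy)
subset = []
for j in range(k):
    members = np.flatnonzero(km.labels_ == j)
    d = np.linalg.norm(proxy[members] - km.cluster_centers_[j], axis=1)
    subset.append(int(members[np.argmin(d)]))
print(sorted(subset))                # indices of realizations to simulate exactly
```

Only these k realizations are then run through the expensive solver; the paired (proxy, exact) responses become the learning set for the error model.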
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
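The two-stage (delayed-acceptance) idea can be sketched with toy one-dimensional densities; the Gaussian targets, proposal scale and chain length below are illustrative assumptions, with the cheap proxy playing the role of the corrected approximate flow model:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_exact(x):   # exact (expensive) log-target: standard normal
    return -0.5 * x**2

def log_proxy(x):   # cheap proxy: slightly biased, wider normal
    return -0.5 * ((x - 0.1) / 1.2)**2

x, chain, exact_calls = 0.0, [], 0
lp_exact_x, lp_proxy_x = log_exact(x), log_proxy(x)
for _ in range(5000):
    y = x + rng.normal(scale=1.0)            # symmetric random-walk proposal
    # Stage 1: screen the proposal with the proxy only (no exact-model call).
    lp_proxy_y = log_proxy(y)
    if np.log(rng.uniform()) < lp_proxy_y - lp_proxy_x:
        # Stage 2: corrected acceptance ratio, now paying for an exact call.
        exact_calls += 1
        lp_exact_y = log_exact(y)
        if np.log(rng.uniform()) < (lp_exact_y - lp_exact_x) + (lp_proxy_x - lp_proxy_y):
            x, lp_exact_x, lp_proxy_x = y, lp_exact_y, lp_proxy_y
    chain.append(x)

chain = np.array(chain)
print(exact_calls, round(chain.mean(), 2), round(chain.std(), 2))
```

The stage-2 ratio cancels the proxy screening, so the chain still targets the exact density while proposals rejected by the proxy never trigger an exact-model evaluation.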
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials.
Abstract:
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error-monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low-resolution electromagnetic tomography (sLORETA), we found greater activation of the insula for errors on a numerical task as compared to errors on a nonnumerical task only in the HMA group. The results were interpreted according to the motivational significance theory of the ERN.
Abstract:
Potentially inappropriate medication (PIM) prescribing is a common public health problem. Mainly through the associated adverse drug events (ADE), it results in major morbidity and mortality, as well as increased healthcare utilization. The systematic review of prescribed medications has long been seen as a solution for limiting PIM and the ADE associated with such prescriptions. With this aim, the list of STOPP-START criteria, available since 2008, has proved attractive in its design, as well as logical and easy to use. The initial version has just been updated and improved. After detailing all the improvements made to the 2008 version, we present the result of its adaptation into the French language by a group of French-speaking experts from Belgium, Canada, France, and Switzerland.
Abstract:
BACKGROUND: Pregnant women with asthma need to take medication during pregnancy. OBJECTIVE: We sought to identify whether there is an increased risk of specific congenital anomalies after exposure to antiasthma medication in the first trimester of pregnancy. METHODS: We performed a population-based case-malformed control study testing signals identified in a literature review. Odds ratios (ORs) of exposure to the main groups of asthma medication were calculated for each of the 10 signal anomalies compared with registrations with nonchromosomal, nonsignal anomalies as control registrations. In addition, exploratory analyses were done for each nonsignal anomaly. The data set included 76,249 registrations of congenital anomalies from 13 EUROmediCAT registries. RESULTS: Cleft palate (OR, 1.63; 95% CI, 1.05-2.52) and gastroschisis (OR, 1.89; 95% CI, 1.12-3.20) had significantly increased odds of exposure to first-trimester use of inhaled β2-agonists compared with nonchromosomal control registrations. Odds of exposure to salbutamol were similar. Nonsignificant ORs of exposure to inhaled β2-agonists were found for spina bifida, cleft lip, anal atresia, severe congenital heart defects in general, or tetralogy of Fallot. None of the 4 literature signals of exposure to inhaled steroids were confirmed (cleft palate, cleft lip, anal atresia, and hypospadias). Exploratory analyses found an association between renal dysplasia and exposure to the combination of long-acting β2-agonists and inhaled corticosteroids (OR, 3.95; 95% CI, 1.99-7.85). CONCLUSIONS: The study confirmed increased odds of first-trimester exposure to inhaled β2-agonists for cleft palate and gastroschisis and found a potential new signal for renal dysplasia associated with combined long-acting β2-agonists and inhaled corticosteroids. Use of inhaled corticosteroids during the first trimester of pregnancy seems to be safe in relation to the risk for a range of specific major congenital anomalies.
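Odds ratios with 95% confidence intervals of the kind reported above are conventionally computed with the Woolf (logit) method from a 2×2 exposure-by-outcome table. A minimal sketch with invented counts (not the EUROmediCAT data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI (Woolf/logit method) for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo, hi = or_ * math.exp(-z * se), or_ * math.exp(z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
print(odds_ratio_ci(20, 180, 500, 8000))
```

An interval whose lower bound exceeds 1 corresponds to the "significantly increased odds" wording used in the abstract.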
Abstract:
BACKGROUND: Earlobe crease (ELC) has been associated with cardiovascular diseases (CVD) or risk factors (CVRF) and could be a marker predisposing to CVD. However, most studies assessed only a small number of CVRF, and no complete assessment of the associations between ELC and CVRF has been performed in a single study. METHODS: Population-based study (n = 4635, 46.7 % men) conducted between 2009 and 2012 in Lausanne, Switzerland. RESULTS: Eight hundred and six participants (17.4 %) had an ELC. Presence of ELC was associated with male gender and older age. After adjusting for age and gender (and medication whenever necessary), presence of ELC was significantly (p < 0.05) associated with higher levels of body mass index (BMI) [adjusted mean ± standard error: 27.0 ± 0.2 vs. 26.02 ± 0.07 kg/m(2)], triglycerides [1.40 ± 0.03 vs. 1.36 ± 0.01 mmol/L] and insulin [8.8 ± 0.2 vs. 8.3 ± 0.1 μIU/mL]; lower levels of HDL cholesterol [1.61 ± 0.02 vs. 1.64 ± 0.01 mmol/L]; and higher frequency of abdominal obesity [odds ratio (95 % confidence interval): 1.20 (1.02; 1.42)], hypertension [1.41 (1.18; 1.67)], diabetes [1.43 (1.15; 1.79)], high HOMA-IR [1.19 (1.00; 1.42)], metabolic syndrome [1.28 (1.08; 1.51)] and history of CVD [1.55 (1.21; 1.98)]. No associations were found between ELC and estimated cardiovascular risk, inflammatory or liver markers. After further adjustment for BMI, only the associations between ELC and hypertension [1.30 (1.08; 1.56)] and history of CVD [1.47 (1.14; 1.89)] remained significant. For history of CVD, further adjustment for diabetes, hypertension, total cholesterol and smoking led to similar results [1.36 (1.05; 1.77)]. CONCLUSION: In this community-based sample, ELC was significantly and independently associated with hypertension and history of CVD.
Abstract:
Adjusting behavior following the detection of inappropriate actions allows flexible adaptation to task demands and environmental contingencies during goal-directed behaviors. Post-error behavioral adjustments typically consist in adopting a more cautious response mode, which manifests as a slowing down of response speed. Although converging evidence implicates the dorsolateral prefrontal cortex (DLPFC) in post-error behavioral adjustment, whether and when the left or right DLPFC is critical for post-error slowing (PES), as well as the underlying brain mechanisms, remain highly debated. To resolve these issues, we used single-pulse transcranial magnetic stimulation (TMS) in healthy human adults to disrupt the left or right DLPFC selectively at various delays within the 30-180 ms interval following the commission of false alarms (FA), while participants performed a standard visual Go/NoGo task. PES significantly increased after TMS disruption of the right, but not the left, DLPFC at 150 ms post-FA response. We discuss these results in terms of an involvement of the right DLPFC in reducing the detrimental effects of error detection on subsequent behavioral performance, as opposed to implementing an adaptive error-induced slowing down of response speed.
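Post-error slowing as discussed above is conventionally computed as the mean reaction time on trials following an error minus the mean reaction time on trials following a correct response; a minimal sketch on invented reaction times:

```python
import numpy as np

def post_error_slowing(rt, correct):
    """Classic PES: mean RT after errors minus mean RT after correct trials."""
    rt, correct = np.asarray(rt, float), np.asarray(correct, bool)
    after_err = rt[1:][~correct[:-1]]   # trials preceded by an error
    after_cor = rt[1:][correct[:-1]]    # trials preceded by a correct response
    return after_err.mean() - after_cor.mean()

# Toy data (hypothetical RTs in ms): responses slow down after each error.
rt      = [400, 380, 520, 410, 390, 540, 400]
correct = [True, False, True, True, False, True, True]
print(post_error_slowing(rt, correct))  # → 135.0
```

A positive value indicates slowing after errors; in the study above, disrupting the right DLPFC at 150 ms increased this quantity.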
Abstract:
Fourier analysis makes it possible to characterize the outline of a tooth and obtain a series of parameters for subsequent multivariate analysis. However, the great complexity of some shapes makes it necessary to determine the intrinsic measurement error that is introduced. The aim of this work is to apply and validate Fourier analysis in the study of the dental shape of the lower second molar (M2) of four species of Hominoidea primates, in order to explore interspecific morphometric variability and to determine the measurement error at the intra- and interobserver level. The outline of the occlusal surface of the tooth was digitized, and with the functions derived from the Fourier analysis, discriminant analyses and Mantel tests (Pearson correlations) were performed to determine shape differences from the measurements taken. The results indicate that Fourier analysis captures the shape variability of molar teeth in Hominoidea primate species. Additionally, the high levels of intraobserver (r > 0.9) and interobserver (r > 0.7) correlation suggest that morphometric descriptions of the tooth obtained with Fourier methods by different observers can be pooled for subsequent analyses.
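A minimal sketch of outline characterization with complex Fourier descriptors, one common variant of the Fourier methods mentioned above (the elliptical "outlines" below are synthetic stand-ins for digitized occlusal contours):

```python
import numpy as np

def fourier_descriptors(x, y, n_harm=8):
    """Complex Fourier descriptors of a closed outline, normalized to be
    invariant to translation (c0 is dropped), scale (divide by |c1|) and
    rotation/starting point (magnitudes only)."""
    z = np.asarray(x, float) + 1j * np.asarray(y, float)
    c = np.fft.fft(z) / len(z)
    mags = np.abs(np.concatenate([c[1:n_harm + 1], c[-n_harm:]]))
    return mags / np.abs(c[1])

# Toy outlines: an ellipse and a scaled, shifted copy of the same shape.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
d1 = fourier_descriptors(3 + 2 * np.cos(t), 1 + np.sin(t))
d2 = fourier_descriptors(10 + 4 * np.cos(t), -2 + 2 * np.sin(t))
print(np.allclose(d1, d2))  # prints True: same shape, same descriptors
```

Descriptor vectors of this kind are the sort of shape parameters that can then be fed into discriminant analyses or correlated across observers.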
Abstract:
The Community Pharmacy of the Department of Ambulatory Care and Community Medicine (Policlinique Médicale Universitaire, PMU), University of Lausanne, developed and implemented an interdisciplinary medication adherence program. The program aims to support and reinforce medication adherence through a multifactorial and interdisciplinary intervention. Motivational interviewing is combined with electronic medication adherence monitors (MEMS, Aardex MWV) and a report to the patient, physician, nurse, and other pharmacists. The program has become a routine activity and has been extended to all chronic diseases. From 2004 to 2014, 819 patients were included, and 268 patients were in follow-up in 2014. This paper presents the program's organization and context, statistical data, published research, and future perspectives.
Abstract:
Using event-related brain potentials, the time course of error detection and correction was studied in healthy human subjects. A feedforward model of error correction was used to predict the timing properties of the error and corrective movements. Analysis of the multichannel recordings focused on (1) the error-related negativity (ERN) seen immediately after errors in response- and stimulus-locked averages and (2) the lateralized readiness potential (LRP) reflecting motor preparation. Comparison of the onset and time course of the ERN and LRP components showed that the signs of corrective activity preceded the ERN. Thus, error correction was implemented before, or at least in parallel with, the appearance of the ERN component. Also, the amplitude of the ERN component was increased for errors followed by fast corrective movements. The results are compatible with recent views considering the ERN component as the output of an evaluative system engaged in monitoring motor conflict.