973 results for Roundoff errors.
Abstract:
QUESTIONS UNDER STUDY AND PRINCIPLES: Estimating glomerular filtration rate (GFR) in hospitalised patients with chronic kidney disease (CKD) is important for drug prescription, but it remains a difficult task. The purpose of this study was to investigate the reliability of selected algorithms based on serum creatinine, cystatin C and beta-trace protein to estimate GFR, and the potential added advantage of measuring muscle mass by bioimpedance. In a prospective, unselected group of patients with CKD hospitalised in a general internal medicine ward, GFR was evaluated using inulin clearance as the gold standard and the algorithms of Cockcroft, MDRD, Larsson (cystatin C), White (beta-trace) and MacDonald (creatinine and muscle mass by bioimpedance). Sixty-nine patients were included in the study. Median age (interquartile range) was 80 years (73-83); weight 74.7 kg (67.0-85.6), appendicular lean mass 19.1 kg (14.9-22.3), serum creatinine 126 μmol/l (100-149), cystatin C 1.45 mg/l (1.19-1.90), beta-trace protein 1.17 mg/l (0.99-1.53) and GFR measured by inulin 30.9 ml/min (22.0-43.3). The errors in the estimation of GFR and the areas under the ROC curves (95% confidence interval) relative to inulin were, respectively: Cockcroft 14.3 ml/min (5.55-23.2) and 0.68 (0.55-0.81), MDRD 16.3 ml/min (6.4-27.5) and 0.76 (0.64-0.87), Larsson 12.8 ml/min (4.50-25.3) and 0.82 (0.72-0.92), White 17.6 ml/min (11.5-31.5) and 0.75 (0.63-0.87), MacDonald 32.2 ml/min (13.9-45.4) and 0.65 (0.52-0.78). Currently used algorithms overestimate GFR in hospitalised patients with CKD. As a consequence, eGFR-targeted prescriptions of renally cleared drugs might expose patients to overdosing. The best results were obtained with the Larsson algorithm. The determination of muscle mass by bioimpedance did not contribute significantly.
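Of the algorithms compared, Cockcroft-Gault is the simplest to state. A minimal sketch of the published formula follows, with serum creatinine taken in μmol/l (the constants 1.23/1.04 absorb the unit conversion from the original mg/dl form); this is a generic illustration, not the study's implementation, and the example patient merely reuses the study's median values.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_umol_l, female=False):
    """Estimate creatinine clearance (ml/min) with the Cockcroft-Gault formula.

    Serum creatinine is expected in umol/l; the factor 1.23 (men) / 1.04
    (women) corresponds to the common umol/l formulation of the equation.
    """
    k = 1.04 if female else 1.23
    return (140 - age_years) * weight_kg * k / serum_creatinine_umol_l

# A patient matching the study's medians: 80 years, 74.7 kg, 126 umol/l.
print(round(cockcroft_gault(80, 74.7, 126), 1))  # → 43.8
```

Note that 43.8 ml/min is well above the 30.9 ml/min median inulin GFR reported above, consistent with the abstract's conclusion that these algorithms overestimate GFR in this population.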
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
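Of the geostatistical methods named above, ordinary kriging is the easiest to illustrate. The sketch below is a 1-D ordinary-kriging predictor written from the textbook definition: build the kriging system with a Lagrange multiplier enforcing unit-sum weights, solve it, and take the weighted sum of the samples. The exponential variogram, its parameters and the 1-D setting are illustrative assumptions, not the paper's configuration (which is 2-D precipitation mapping with fitted variograms).

```python
import math

def ordinary_kriging(xs, zs, x0, sill=1.0, rng=1.0):
    """Ordinary-kriging prediction at x0 from 1-D samples (xs, zs),
    using an exponential variogram gamma(h) = sill * (1 - exp(-h/rng))."""
    gamma = lambda h: sill * (1.0 - math.exp(-h / rng))
    n = len(xs)
    # Kriging system: [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1]
    A = [[gamma(abs(xs[i] - xs[j])) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(abs(x - x0)) for x in xs] + [1.0]
    # Solve by Gaussian elimination with partial pivoting.
    m = n + 1
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        w[r] = (M[r][m] - sum(M[r][c] * w[c] for c in range(r + 1, m))) / M[r][r]
    return sum(w[i] * zs[i] for i in range(n))  # weights sum to 1

xs, zs = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
# With no nugget, kriging interpolates exactly at a sample location:
print(round(ordinary_kriging(xs, zs, 1.0), 6))  # → 3.0
```

The unit-sum constraint on the weights is what makes the estimator unbiased without knowing the mean; kriging with external drift, also used in the paper, extends this system with additional drift terms.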
Abstract:
High-field (>or=3 T) cardiac MRI is challenged by inhomogeneities of both the static magnetic field (B(0)) and the transmit radiofrequency field (B(1)+). The inhomogeneous B fields not only demand improved shimming methods but also impede the correct determination of the zero-order terms, i.e., the local resonance frequency f(0) and the radiofrequency power to generate the intended local B(1)+ field. In this work, dual echo time B(0)-map and dual flip angle B(1)+-map acquisition methods are combined to acquire multislice B(0)- and B(1)+-maps simultaneously covering the entire heart in a single breath hold of 18 heartbeats. A previously proposed excitation pulse shape dependent slice profile correction is tested and applied to reduce systematic errors of the multislice B(1)+-map. Localized higher-order shim correction values including the zero-order terms for frequency f(0) and radiofrequency power can be determined based on the acquired B(0)- and B(1)+-maps. This method has been tested in 7 healthy adult human subjects at 3 T and improved the B(0) field homogeneity (standard deviation) from 60 Hz to 35 Hz and the average B(1)+ field from 77% to 100% of the desired B(1)+ field when compared to more commonly used preparation methods.
Abstract:
Report for the scientific sojourn carried out at the University of California, Berkeley, from September to December 2007. Environmental niche modelling (ENM) techniques are powerful tools to predict species' potential distributions. In the last ten years, a plethora of novel methodological approaches and modelling techniques have been developed. During three months, I stayed at the University of California, Berkeley, working under the supervision of Dr. David R. Vieites. The aim of our work was to quantify the error committed by these techniques, and also to test how an increase in sample size affects the resulting predictions. Using the MaxEnt software, we generated predictive distribution maps, from different sample sizes, of the Eurasian quail (Coturnix coturnix) in the Iberian Peninsula. The quail is a generalist species from a climatic point of view, but a habitat specialist. The resulting distribution maps were compared with the real distribution of the species, obtained from recent bird atlases of Spain and Portugal. Results show that ENM techniques can make important errors when predicting the distribution of generalist species. Moreover, an increase in sample size is not necessarily related to better performance of the models. We conclude that deep knowledge of a species' biology and of the variables affecting its distribution is crucial for optimal modelling; the lack of this knowledge can lead to wrong conclusions.
Abstract:
The company Eatout wants to create a dashboard from which to monitor restaurant sales. Currently, staff check which locations have yet to close the day's sales, and errors are tracked in a spreadsheet. This project aims to speed up and simplify sales management and to analyse the possible causes of those missing closures. To that end, a .NET application will be built to manage the closures still pending on a given day, recording the reason for each; these data will then be analysed with SAP's Business Objects tool to create the dashboard.
Abstract:
Fault tolerance is a research area that has gained importance with the growing computing power of today's supercomputers: more processing power means more components, and with them a greater number of failures. Most current fault-tolerance strategies are centralised and do not scale to large process counts, since they require synchronisation among all processes to carry out the fault-tolerance tasks. Moreover, maintaining the performance of parallel programs, both in the presence and in the absence of failures, is crucial. With this in mind, this work focuses on a decentralised fault-tolerant architecture (RADIC, Redundant Array of Distributed and Independent Controllers) that aims to preserve the initial performance and to guarantee the lowest possible overhead for reconfiguring the system after a failure. The architecture has been implemented in the Open MPI message-passing library, currently one of the most widely used in scientific computing for running parallel message-passing programs. Initial tests show that the system introduces minimal overhead to carry out the fault-tolerance tasks. MPI is by default a fail-stop standard, and in the implementations that add some level of tolerance the most common strategies are coordinated. In RADIC, when a failure occurs the process is recovered on another node, rolling back to a previously stored state through uncoordinated checkpoints and the replay of messages from the event log.
During recovery, communications with the affected process must be delayed and redirected to its new location. Restarting processes on a node that already hosts processes overloads the execution and degrades performance, so this work proposes using spare nodes to recover failed processes, thereby avoiding overload on nodes that already have work. A design is presented for managing recovery on spare nodes automatically and in a decentralised way in an Open MPI environment, together with an analysis of the performance impact of this design. Initial results show significant degradation when several failures occur during execution and no spares are used, whereas using spares restores the initial configuration and maintains performance.
Abstract:
RATIONALE AND OBJECTIVES: To determine optimum spatial resolution when imaging peripheral arteries with magnetic resonance angiography (MRA). MATERIALS AND METHODS: Eight vessel diameters ranging from 1.0 to 8.0 mm were simulated in a vascular phantom. A total of 40 three-dimensional flash MRA sequences were acquired with incremental variations of fields of view, matrix size, and slice thickness. The accurately known eight diameters were combined pairwise to generate 22 "exact" degrees of stenosis ranging from 42% to 87%. Then, the diameters were measured in the MRA images by three independent observers and with quantitative angiography (QA) software and used to compute the degrees of stenosis corresponding to the 22 "exact" ones. The accuracy and reproducibility of vessel diameter measurements and stenosis calculations were assessed for vessel size ranging from 6 to 8 mm (iliac artery), 4 to 5 mm (femoro-popliteal arteries), and 1 to 3 mm (infrapopliteal arteries). Maximum pixel dimension and slice thickness to obtain a mean error in stenosis evaluation of less than 10% were determined by linear regression analysis. RESULTS: Mean errors on stenosis quantification were 8.8% +/- 6.3% for 6- to 8-mm vessels, 15.5% +/- 8.2% for 4- to 5-mm vessels, and 18.9% +/- 7.5% for 1- to 3-mm vessels. Mean errors on stenosis calculation were 12.3% +/- 8.2% for observers and 11.4% +/- 15.1% for QA software (P = .0342). To evaluate stenosis with a mean error of less than 10%, maximum pixel surface, the pixel size in the phase direction, and the slice thickness should be less than 1.56 mm2, 1.34 mm, 1.70 mm, respectively (voxel size 2.65 mm3) for 6- to 8-mm vessels; 1.31 mm2, 1.10 mm, 1.34 mm (voxel size 1.76 mm3), for 4- to 5-mm vessels; and 1.17 mm2, 0.90 mm, 0.9 mm (voxel size 1.05 mm3) for 1- to 3-mm vessels. CONCLUSION: Higher spatial resolution than currently used should be selected for imaging peripheral vessels.
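The stenosis degrees above follow from pairing two of the known phantom diameters. A minimal sketch of the standard diameter-stenosis calculation (assumed here from the abstract's description, not taken from the paper's QA software):

```python
def stenosis_percent(d_stenotic_mm, d_reference_mm):
    """Degree of diameter stenosis: (1 - stenotic/reference) * 100."""
    return (1.0 - d_stenotic_mm / d_reference_mm) * 100.0

# Pairing the phantom's smallest and largest diameters (1.0 mm and 8.0 mm)
# gives the upper end of the 42-87% range quoted in the abstract:
print(round(stenosis_percent(1.0, 8.0), 1))  # → 87.5
```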
Abstract:
OBJECTIVE: To assess whether formatting the medical order sheet has an effect on the accuracy and security of antibiotics prescription. DESIGN: Prospective assessment of antibiotics prescription over time, before and after the intervention, in comparison with a control ward. SETTING: The medical and surgical intensive care unit (ICU) of a university hospital. PATIENTS: All patients hospitalized in the medical or surgical ICU between February 1 and April 30, 1997, and July 1 and August 31, 2000, for whom antibiotics were prescribed. INTERVENTION: Formatting of the medical order sheet in the surgical ICU in 1998. MEASUREMENTS AND MAIN RESULTS: Compliance with the American Society of Hospital Pharmacists' criteria for prescription safety was measured. The proportion of safe orders increased in both units, but the increase was 4.6 times greater in the surgical ICU (66% vs. 74% in the medical ICU and 48% vs. 74% in the surgical ICU). For unsafe orders, the proportion of ambiguous orders decreased by half in the medical ICU (9% vs. 17%) and nearly disappeared in the surgical ICU (1% vs. 30%). The only missing criterion remaining in the surgical ICU was the drug dose unit, which could not be preformatted. The aim of antibiotics prescription (either prophylactic or therapeutic) was indicated only in 51% of the order sheets. CONCLUSIONS: Formatting of the order sheet markedly increased security of antibiotics prescription. These findings must be confirmed in other settings and with different drug classes. Formatting the medical order sheet decreases the potential for prescribing errors before full computerized prescription is available.
Abstract:
In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error-control criteria are investigated numerically. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
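The residual-based control described above can be sketched generically: extra iterations are spent only while the pressure residual exceeds a user-specified threshold. Everything in this sketch (the operator, the Jacobi-style smoother, the max-norm, the toy 2x2 system) is a stand-in for the actual MSFV machinery, chosen only to show the control flow.

```python
def residual_controlled_solve(apply_A, rhs, p0, smoother, tol, max_iters=50):
    """Iterate only while the residual r = rhs - A p exceeds `tol`,
    then stop; the accepted solution can be reused at later time steps."""
    p = list(p0)
    for _ in range(max_iters):
        r = [b - a for a, b in zip(apply_A(p), rhs)]
        if max(abs(x) for x in r) <= tol:
            break  # specified pressure accuracy reached; no further iterations
        p = smoother(p, r)
    return p

# Toy stand-in for the pressure system: a diagonally dominant 2x2 matrix
# with a Jacobi smoother (exact solution p = [1/11, 7/11]).
A = [[4.0, 1.0], [1.0, 3.0]]
rhs = [1.0, 2.0]
apply_A = lambda p: [sum(A[i][j] * p[j] for j in range(2)) for i in range(2)]
jacobi = lambda p, r: [p[i] + r[i] / A[i][i] for i in range(2)]
p = residual_controlled_solve(apply_A, rhs, [0.0, 0.0], jacobi, tol=1e-10)
print(round(p[0], 6), round(p[1], 6))  # → 0.090909 0.636364
```

Tightening `tol` trades extra smoothing iterations for pressure accuracy, which is exactly the trade-off the paper's error-control criteria quantify.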
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
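The per-sample efficiency fit described above can be illustrated as follows. Under the amplification model F_c = F0 · E^c, a regression of log10(fluorescence) against cycle number over the log-linear phase yields the efficiency as 10^slope. This is an illustrative reimplementation of that idea on synthetic, baseline-free data; the article's actual contribution, reconstructing the baseline downward from the early plateau, is not reproduced here.

```python
import math

def pcr_efficiency(cycles, fluorescence):
    """Least-squares fit of log10(fluorescence) vs. cycle number.
    Under F_c = F0 * E**c the slope is log10(E); returns (E, F0)."""
    n = len(cycles)
    ys = [math.log10(f) for f in fluorescence]
    mx = sum(cycles) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cycles, ys))
             / sum((x - mx) ** 2 for x in cycles))
    intercept = my - slope * mx
    return 10 ** slope, 10 ** intercept

# Synthetic log-linear phase: F0 = 1e-6, efficiency E = 1.9 (i.e. 90%).
cycles = list(range(15, 23))
fluor = [1e-6 * 1.9 ** c for c in cycles]
eff, f0 = pcr_efficiency(cycles, fluor)
print(round(eff, 3))  # → 1.9
```

Because the efficiency enters the model exponentially, even a small baseline error shifts the fitted slope and is amplified in the back-calculated starting concentration F0, which is the article's central point.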
Abstract:
Glucose transporter-1 deficiency syndrome is caused by mutations in the SLC2A1 gene in the majority of patients and results in impaired glucose transport into the brain. From 2004-2008, 132 requests for mutational analysis of the SLC2A1 gene were studied by automated Sanger sequencing and multiplex ligation-dependent probe amplification. Mutations in the SLC2A1 gene were detected in 54 patients (41%) and subsequently in three clinically affected family members. In these 57 patients we identified 49 different mutations, including six multiple exon deletions, six known mutations and 37 novel mutations (13 missense, five nonsense, 13 frame shift, four splice site and two translation initiation mutations). Clinical data were retrospectively collected from referring physicians by means of a questionnaire. Three different phenotypes were recognized: (i) the classical phenotype (84%), subdivided into early-onset (<2 years) (65%) and late-onset (18%); (ii) a non-classical phenotype, with mental retardation and movement disorder, without epilepsy (15%); and (iii) one adult case of glucose transporter-1 deficiency syndrome with minimal symptoms. Recognizing glucose transporter-1 deficiency syndrome is important, since a ketogenic diet was effective in most of the patients with epilepsy (86%) and also reduced movement disorders in 48% of the patients with a classical phenotype and 71% of the patients with a non-classical phenotype. The average delay in diagnosing classical glucose transporter-1 deficiency syndrome was 6.6 years (range 1 month-16 years). Cerebrospinal fluid glucose was below 2.5 mmol/l (range 0.9-2.4 mmol/l) in all patients and cerebrospinal fluid : blood glucose ratio was below 0.50 in all but one patient (range 0.19-0.52). Cerebrospinal fluid lactate was low to normal in all patients. Our relatively large series of 57 patients with glucose transporter-1 deficiency syndrome allowed us to identify correlations between genotype, phenotype and biochemical data. 
Type of mutation was related to the severity of mental retardation and the presence of complex movement disorders. Cerebrospinal fluid : blood glucose ratio was related to type of mutation and phenotype. In conclusion, a substantial number of the patients with glucose transporter-1 deficiency syndrome do not have epilepsy. Our study demonstrates that a lumbar puncture provides the diagnostic clue to glucose transporter-1 deficiency syndrome and can thereby dramatically reduce diagnostic delay to allow early start of the ketogenic diet.
Abstract:
Almost all individuals (182) of an Amazonian riverine population (Portuchuelo, RO, Brazil) were investigated to obtain data on epidemiological aspects of malaria. Thirteen genetic blood polymorphisms were studied (the ABO, MNSs, Rh, Kell and Duffy systems, haptoglobins, hemoglobins, and the enzymes glucose-6-phosphate dehydrogenase, glyoxalase, phosphoglucomutase, carbonic anhydrase, red cell acid phosphatase and esterase D). The results indicated that the Duffy system is associated with susceptibility to malaria, as observed in other endemic areas. Moreover, there were also indications that the EsD and Rh loci may be significantly associated with resistance to malaria. If statistical type II errors and sample stratification can be ruled out, the present findings may be explained either by a causal mechanism or by an unknown, closely linked locus involved in susceptibility to malaria infection.
Abstract:
Background: The imatinib trough plasma concentration (C(min)) correlates with clinical response in cancer patients. Therapeutic drug monitoring (TDM) of plasma C(min) is therefore suggested. In practice, however, blood sampling for TDM is often not performed at trough. The corresponding measurement is thus only remotely informative about C(min) exposure. Objectives: The objectives of this study were to improve the interpretation of randomly measured concentrations by using a Bayesian approach for the prediction of C(min), incorporating correlation between pharmacokinetic parameters, and to compare the predictive performance of this method with alternative approaches, by comparing predictions with actual measured trough levels, and with predictions obtained by a reference method, respectively. Methods: A Bayesian maximum a posteriori (MAP) estimation method accounting for correlation (MAP-ρ) between pharmacokinetic parameters was developed on the basis of a population pharmacokinetic model, which was validated on external data. Thirty-one paired random and trough levels, observed in gastrointestinal stromal tumour patients, were then used for the evaluation of the Bayesian MAP-ρ method: individual C(min) predictions, derived from single random observations, were compared with actual measured trough levels for assessment of predictive performance (accuracy and precision). The method was also compared with alternative approaches: classical Bayesian MAP estimation assuming uncorrelated pharmacokinetic parameters, linear extrapolation along the typical elimination constant of imatinib, and non-linear mixed-effects modelling (NONMEM) first-order conditional estimation (FOCE) with interaction. Predictions of all methods were finally compared with 'best-possible' predictions obtained by a reference method (NONMEM FOCE, using both random and trough observations for individual C(min) prediction). 
Results: The developed Bayesian MAP-ρ method accounting for correlation between pharmacokinetic parameters allowed non-biased prediction of imatinib C(min) with a precision of ±30.7%. This predictive performance was similar for the alternative methods that were applied. The range of relative prediction errors was, however, smallest for the Bayesian MAP-ρ method and largest for the linear extrapolation method. When compared with the reference method, predictive performance was comparable for all methods. The time interval between random and trough sampling did not influence the precision of Bayesian MAP-ρ predictions. Conclusion: Clinical interpretation of randomly measured imatinib plasma concentrations can be assisted by Bayesian TDM. Classical Bayesian MAP estimation can be applied even without consideration of the correlation between pharmacokinetic parameters. Individual C(min) predictions are expected to vary less through Bayesian TDM than linear extrapolation. Bayesian TDM could be developed in the future for other targeted anticancer drugs and for the prediction of other pharmacokinetic parameters that have been correlated with clinical outcomes.
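Of the comparison methods above, the linear extrapolation along a typical elimination constant is simple enough to sketch: in log-concentration space the decay is linear, so a random level is carried forward to trough with C_min ≈ C_obs · exp(-ke · Δt). The 18 h half-life and the example numbers below are illustrative assumptions, not values from the study.

```python
import math

def extrapolate_trough(c_obs, hours_to_trough, half_life_h=18.0):
    """Mono-exponential extrapolation of an observed concentration to trough:
    C_min ~= C_obs * exp(-ke * dt), with ke derived from an assumed typical
    elimination half-life (18 h here, an illustrative value for imatinib)."""
    ke = math.log(2) / half_life_h
    return c_obs * math.exp(-ke * hours_to_trough)

# A level of 1500 ng/ml drawn 8 h before the next dose:
print(round(extrapolate_trough(1500.0, 8.0)))  # → 1102
```

Unlike the Bayesian MAP approach, this extrapolation uses one population-typical constant for every patient, which is consistent with the wider spread of prediction errors the study reports for it.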
Abstract:
OBJECTIVE: To assess the change in non-compliant items in prescription orders following the implementation of a computerized physician order entry (CPOE) system named PreDiMed. SETTING: The department of internal medicine (39 and 38 beds) in two regional hospitals in Canton Vaud, Switzerland. METHOD: The prescription lines in 100 pre- and 100 post-implementation patients' files were classified according to three modes of administration (medicines for oral or other non-parenteral uses; medicines administered parenterally or via nasogastric tube; pro re nata (PRN), as needed) and analyzed for a number of relevant variables constitutive of medical prescriptions. MAIN OUTCOME MEASURE: The monitored variables depended on the pharmaceutical category and included mainly name of medicine, pharmaceutical form, posology and route of administration, diluting solution, flow rate and identification of prescriber. RESULTS: In 2,099 prescription lines, the total number of non-compliant items was 2,265 before CPOE implementation, or 1.079 non-compliant items per line. Two-thirds of these were due to missing information, and the remaining third to incomplete information. In 2,074 prescription lines post-CPOE implementation, the number of non-compliant items had decreased to 221, or 0.107 non-compliant item per line, a dramatic 10-fold decrease (chi(2) = 4615; P < 10(-6)). Limitations of the computerized system were the risk for erroneous items in some non-prefilled fields and ambiguity due to a field with doses shown on commercial products. CONCLUSION: The deployment of PreDiMed in two departments of internal medicine has led to a major improvement in formal aspects of physicians' prescriptions. Some limitations of the first version of PreDiMed were unveiled and are being corrected.
Abstract:
Catherine Comiskey. CIs and Hypothesis Tests, Part 2. Hypothesis Testing:
- Developing Null and Alternative Hypotheses
- Type I and Type II Errors
- Population Mean: σ Known
- Population Mean: σ Unknown
- Population Proportion