884 results for Prediction error method
Abstract:
The purpose of this study was to compare, using cephalometric analysis (McNamara, and Legan and Burstone), prediction tracings performed with three different methods (manual, and the Dentofacial Planner Plus and Dolphin Image computer programs) against postoperative outcomes. Pre- and postoperative (6 months after surgery) lateral cephalometric radiographs were selected from 25 long-faced patients treated with combined surgery. Prediction tracings were made with each method and compared cephalometrically with the postoperative results. This protocol was repeated once more for method-error evaluation. Statistical analysis was performed with ANOVA and the Tukey test. The results showed superior predictability for the manual method (50% similarity to postoperative results), followed by Dentofacial Planner Plus (31.2%) and Dolphin Image (18.8%). Under the experimental conditions, the manual method provided greater accuracy, although the predictability of the digital methods proved quite satisfactory. © 2013 World Federation of Orthodontists.
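The group comparison described (one-way ANOVA followed by Tukey's test across the three prediction methods) can be sketched with standard statistics libraries. A minimal sketch, with placeholder per-patient similarity scores rather than the study's cephalometric data:

```python
# Hypothetical sketch: one-way ANOVA plus Tukey HSD across three
# prediction methods, mirroring the study design described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder similarity scores for 25 patients per method (not real data).
manual = rng.normal(50, 10, 25)
dfp_plus = rng.normal(31, 10, 25)
dolphin = rng.normal(19, 10, 25)

# One-way ANOVA: do the three methods differ overall?
f_stat, p_value = stats.f_oneway(manual, dfp_plus, dolphin)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey's HSD: which pairs of methods differ?
print(stats.tukey_hsd(manual, dfp_plus, dolphin))
```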
Abstract:
There is currently an increasing demand for operational and trustworthy digital data transmission and storage systems, a demand augmented by the appearance of large-scale, high-speed data networks for the exchange, processing and storage of digital information in many different spheres. In this paper, we explore a way to achieve this goal. For given positive integers n, r, we establish that corresponding to a binary cyclic code C0[n, n-r], there is a binary cyclic code C[(n+1)3^k - 1, (n+1)3^k - 1 - 3^k r], where k is a nonnegative integer, which plays a role in enhancing the code rate and the error-correction capability. In the given scheme, the new code C is responsible for carrying the data transmitted by C0; consequently, a codeword of the code C0 can be encoded by the generator matrix of C, and this arrangement offers a safe and swift mode of data transfer. © 2013 SBMAC - Sociedade Brasileira de Matemática Aplicada e Computacional.
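As a hedged illustration of the encoding arrangement described (a message encoded by the generator matrix of a cyclic code), here is a minimal GF(2) sketch using a toy generator polynomial; the code and parameters are placeholders, not the construction from the paper:

```python
# Toy sketch of non-systematic cyclic-code encoding over GF(2):
# codeword = message * generator matrix (mod 2). Parameters illustrative.
import numpy as np

def generator_matrix(g, n):
    """Build the k x n generator matrix of a cyclic code of length n
    from its generator polynomial g (lowest-degree coefficient first)."""
    k = n - (len(g) - 1)
    G = np.zeros((k, n), dtype=int)
    for i in range(k):
        G[i, i:i + len(g)] = g
    return G

# Example: the [7,4] cyclic Hamming code with g(x) = 1 + x + x^3.
g = [1, 1, 0, 1]
G = generator_matrix(g, 7)

msg = np.array([1, 0, 1, 1])   # 4-bit message
codeword = msg @ G % 2         # encode: c = m * G over GF(2)
print(codeword)
```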
Abstract:
Animal by-product meals have large variability in crude protein (CP) content and digestibility. In vivo digestibility procedures are precise but laborious, and in vitro methods could be an alternative to evaluate and classify these ingredients. The present study reports prediction equations to estimate the CP digestibility of meat and bone meal (MBM) and poultry by-product meal (PM) using the protein solubility in pepsin (PSP) method. Total tract CP digestibility of eight MBM and eight PM samples was determined in dogs by the substitution method. A basal diet was formulated for dog maintenance, and sixteen diets were produced by mixing 70 % of the basal diet and 30 % of each tested meal. Six dogs per diet were used to determine ingredient digestibility. In addition, PSP of the MBM and PM samples was determined using three pepsin concentrations: 0·02, 0·002 and 0·0002 %. The CP content of MBM and PM ranged from 39 to 46 % and 57 to 69 %, respectively, and their mean CP digestibility by dogs was 76 (2·4) and 85 (2·6) %, respectively. The pepsin concentrations with the highest Pearson correlation coefficients against the in vivo results were 0·0002 % for MBM (r 0·380; P = 0·008) and 0·02 % for PM (r 0·482; P = 0·005). The relationship between the in vivo and in vitro results was best explained by the following equations: CP digestibility of MBM = 61·7 + 0·2644 × PSP at 0·0002 % (P = 0·008; R² 0·126); and CP digestibility of PM = 54·1 + 0·3833 × PSP at 0·02 % (P = 0·005; R² 0·216). Although significant, the coefficients of determination were low, indicating that the models were weak and should be used with caution.
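As a quick worked example of the reported regression equations (the PSP inputs below are hypothetical, not data from the study):

```python
# Worked example of the reported prediction equations.
# PSP values below are hypothetical inputs for illustration.

def cp_digestibility_mbm(psp_00002: float) -> float:
    """CP digestibility (%) of meat and bone meal from PSP at 0.0002% pepsin."""
    return 61.7 + 0.2644 * psp_00002

def cp_digestibility_pm(psp_002: float) -> float:
    """CP digestibility (%) of poultry by-product meal from PSP at 0.02% pepsin."""
    return 54.1 + 0.3833 * psp_002

print(cp_digestibility_mbm(55.0))  # PSP 55% -> ~76.2% CP digestibility
print(cp_digestibility_pm(80.0))   # PSP 80% -> ~84.8% CP digestibility
```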
Abstract:
This work develops a computational approach for boundary- and initial-value problems using operational matrices, in order to run an evolutive process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures.
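As a loose, hedged illustration of the operational-matrix idea (not the paper's scheme): a boundary-value problem can be reduced to a linear system by assembling a matrix that represents the differential operator on a grid.

```python
# Minimal sketch (our illustration): solve u''(x) = f(x), u(0) = u(1) = 0,
# by assembling a finite-difference "operational matrix" for d^2/dx^2.
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n + 2)       # grid including boundary nodes
h = x[1] - x[0]

# Tridiagonal operational matrix of the second derivative (interior nodes).
D2 = (np.diag(-2.0 * np.ones(n)) +
      np.diag(np.ones(n - 1), 1) +
      np.diag(np.ones(n - 1), -1)) / h**2

f = np.sin(np.pi * x[1:-1])            # right-hand side on interior nodes
u = np.linalg.solve(D2, f)             # discrete solution

# Exact solution -sin(pi x)/pi^2 gives a concrete error measure.
err = np.max(np.abs(u - (-np.sin(np.pi * x[1:-1]) / np.pi**2)))
print(f"max error: {err:.2e}")
```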
Abstract:
The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-VAR) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution, regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-VAR set-up comprising the two water vapour channels and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias-correction procedures and correct radiative transfer simulations. The 1D-VAR retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-VAR are well correlated with the radiosonde measurements. Subsequently, the 1D-VAR technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia-Giulia on 8 July 2004, and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-VAR technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members from an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
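For readers unfamiliar with 1D-VAR, the analysis state minimises the standard variational cost function J(x) = (x-xb)ᵀB⁻¹(x-xb) + (y-Hx)ᵀR⁻¹(y-Hx), balancing the model background against the observations. A minimal sketch under simplifying assumptions (a linear observation operator and toy covariance values; none of these numbers come from the study):

```python
# Minimal 1D-VAR sketch: for a linear observation operator H the
# minimiser of J(x) has the closed (Kalman-gain) form used below.
import numpy as np

n = 4                                          # state size (toy profile)
xb = np.array([280.0, 275.0, 270.0, 265.0])    # background (e.g. T in K)
B = np.diag([4.0, 4.0, 4.0, 4.0])              # background error covariance
H = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])           # toy linear observation operator
R = np.diag([1.0, 1.0])                        # observation error covariance
y = np.array([279.0, 266.0])                   # observations

# Analysis: xa = xb + K (y - H xb), with K = B H^T (H B H^T + R)^-1.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)

# Analysis error covariance A = (I - K H) B quantifies the error reduction.
A = (np.eye(n) - K @ H) @ B
print(xa)
print(np.diag(A))                              # reduced variances vs. diag(B)
```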
A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series
Abstract:
Arrhythmia is a cardiovascular disease that accounts for a large number of deaths and poses a potentially untreatable danger. It is a life-threatening condition originating from the disorganized propagation of electrical signals in the heart, resulting in desynchronization among its chambers. Fundamentally, synchronization means that the phase relationship of electrical activities between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, this coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the heart's regular pumping mechanism. This phenomenon was explored using phase space reconstruction, a standard technique for analysing time series data generated by nonlinear dynamical systems. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Continuously captured long-term ECG recordings were analysed up to the onset of arrhythmia by the phase space reconstruction method, obtaining 2-dimensional images that were then analysed by the box-counting method. The method was tested using ECG data of three kinds, normal (NR), ventricular tachycardia (VT) and ventricular fibrillation (VF), extracted from the PhysioNet ECG database. Statistical measures, namely the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts in the phase space diagrams, are derived over a sliding window of 10 ECG beats. From these statistical analyses, a threshold was derived as an upper bound on the coefficient of variation (CV) of the box counts of ECG phase portraits, capable of reliably predicting impending arrhythmia well before its actual occurrence. As future work, it is planned to validate this prediction tool on a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
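A minimal sketch of the pipeline described above (time-delay embedding of an ECG window into a 2-D phase portrait, then box counting and the coefficient of variation over windows); the delay, grid size, window length and synthetic signal are illustrative assumptions, not the study's settings:

```python
# Sketch: 2-D time-delay embedding of an ECG window + box counting.
# Parameters (delay tau, 32x32 grid, synthetic signal) are illustrative.
import numpy as np

def box_count(signal: np.ndarray, tau: int = 8, grid: int = 32) -> int:
    """Count occupied boxes in the 2-D phase portrait (x(t), x(t+tau))."""
    x, y = signal[:-tau], signal[tau:]

    def to_bins(v):
        # Normalise to [0, 1) and map points onto a grid x grid lattice.
        span = v.max() - v.min()
        v = (v - v.min()) / (span + 1e-12)
        return np.minimum((v * grid).astype(int), grid - 1)

    return len(set(zip(to_bins(x), to_bins(y))))

# Synthetic stand-in for ECG windows (a noisy periodic signal).
t = np.linspace(0, 10, 2000)
ecg = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)

counts = [box_count(ecg[i:i + 400]) for i in range(0, 1600, 400)]
cv = np.std(counts) / np.mean(counts)   # coefficient of variation sigma/mu
print(counts, f"CV={cv:.3f}")
```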
Abstract:
The chemotherapeutic drug 5-fluorouracil (5-FU) is widely used for treating solid tumors. Response to 5-FU treatment is variable, with 10-30% of patients experiencing serious toxicity, partly explained by reduced activity of dihydropyrimidine dehydrogenase (DPD). DPD converts endogenous uracil (U) into 5,6-dihydrouracil (UH2) and, analogously, 5-FU into 5-fluoro-5,6-dihydrouracil (5-FUH2). Combined quantification of U and UH2 with 5-FU and 5-FUH2 may provide a pre-therapeutic assessment of DPD activity and further guide drug dosing during therapy. Here, we report the development of a liquid chromatography-tandem mass spectrometry assay for simultaneous quantification of U, UH2, 5-FU and 5-FUH2 in human plasma. Samples were prepared by liquid-liquid extraction with 10:1 ethyl acetate-2-propanol (v/v). The evaporated samples were reconstituted in 0.1% formic acid and 10 μL aliquots were injected into the HPLC system. Analyte separation was achieved on an Atlantis dC18 column with a mobile phase consisting of 1.0 mM ammonium acetate, 0.5 mM formic acid and 3.3% methanol. Positively ionized analytes were detected by multiple reaction monitoring. The analytical response was linear in the range 0.01-10 μM for U, 0.1-10 μM for UH2, 0.1-75 μM for 5-FU and 0.75-75 μM for 5-FUH2, covering the expected concentration ranges in plasma. The method was validated following the FDA guidelines and applied to clinical samples obtained from ten 5-FU-treated colorectal cancer patients. The present method merges the analysis of 5-FU pharmacokinetics and DPD activity into a single assay, representing a valuable tool to improve the efficacy and safety of 5-FU-based chemotherapy.
Abstract:
The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses (n = 119) involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.
Abstract:
BACKGROUND After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS / METHODS This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compared fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those with a post-CPB Clauss fibrinogen ≥2.0 g/l with a specificity of 0.83. CONCLUSIONS When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
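The threshold evaluation reported above reduces to standard confusion-matrix statistics for the rule "on-CPB A10 ≤ 10 mm predicts low post-CPB fibrinogen". A minimal sketch with invented measurement pairs (not study data):

```python
# Sketch: evaluate a threshold rule (A10 <= 10 mm -> fibrinogen <= 1.5 g/l)
# via sensitivity, specificity and PPV. All values below are invented.
import numpy as np

a10 = np.array([6, 8, 9, 11, 12, 14, 7, 10, 13, 15])   # on-CPB A10 (mm)
fib = np.array([1.2, 1.4, 1.3, 1.45, 2.1, 2.4,
                1.1, 1.9, 2.2, 2.6])                    # post-CPB fibrinogen (g/l)

predicted_low = a10 <= 10          # test positive
actual_low = fib <= 1.5            # condition positive

tp = np.sum(predicted_low & actual_low)
fn = np.sum(~predicted_low & actual_low)
fp = np.sum(predicted_low & ~actual_low)
tn = np.sum(~predicted_low & ~actual_low)

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
```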
Abstract:
This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90±1.37 (mg/dl) for the original profiles.
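A minimal sketch of the two signal-processing steps mentioned (cubic-spline smoothing of a predictor output, and isolating high-frequency components with a zero-delay non-causal filter); the sampling rate, cutoff and smoothing factor are illustrative assumptions, not the paper's settings:

```python
# Sketch: smooth a noisy predicted glucose trace with a cubic smoothing
# spline, then isolate high-frequency content with a zero-phase filter
# (scipy.signal.filtfilt) and report its RMS. Parameters are illustrative.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import butter, filtfilt

t = np.arange(0, 300, 5.0)                      # 5-min CGM sampling (min)
glucose_pred = (120 + 30 * np.sin(2 * np.pi * t / 180)
                + 5 * np.random.randn(t.size))  # noisy predictor output

# Cubic smoothing spline (k=3); s controls the smoothing strength.
spline = UnivariateSpline(t, glucose_pred, k=3, s=len(t) * 25.0)
smoothed = spline(t)

# Zero-delay non-causal high-pass filter isolates the noise band.
b, a = butter(2, 0.2, btype="highpass")         # normalised cutoff (toy)
hfc = filtfilt(b, a, glucose_pred)
hfc_rms = np.sqrt(np.mean(hfc ** 2))
print(f"HFCrms = {hfc_rms:.2f} mg/dl")
```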
Abstract:
In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is not yet negligible. It is shown in this work that some of the fundamental assumptions about error behaviour, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behaviour necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the estimation of the truncation error. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations.
The developed quasi-a priori τ-estimation method permits one to decouple the interfacial and the interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution, as well as its rate of convergence in polynomial order. It is demonstrated here that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
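As a hedged notational sketch of the quantity at stake (standard τ-estimation definitions in illustrative symbols; the thesis's exact notation may differ): the truncation error is the discrete operator applied to the exact solution, and the quasi-a priori estimate replaces the exact solution by a higher-order, possibly unconverged one while subtracting its residual:

```latex
% Illustrative notation, not necessarily the thesis's own:
% \bar{u}: exact steady solution; I_N: interpolation to the order-N mesh;
% \mathcal{R}_N: discrete spatial operator at order N; u_P: higher-order
% (P > N), possibly non-time-converged solution.
\tau^{N} = \mathcal{R}_N\!\left( I_N \bar{u} \right), \qquad
\hat{\tau}^{N}_{P} = \mathcal{R}_N\!\left( I_N u_P \right)
                   - I_N\, \mathcal{R}_P\!\left( u_P \right),
\qquad P > N .
```

The second term removes the contribution of the non-negligible iterative residual of the order-P solution, which is what makes the estimate usable before full time convergence.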
Abstract:
Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear frequently enough for the algorithm to capture them, and in environments with too many variables, where logistic regression tends to overfit, while manually selecting a subset of variables to create the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression, a method that allows us to automatically select useful variables, a process of cross-validation on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a low number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
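A minimal sketch of the modelling recipe described above (elastic net logistic regression with two layers of cross-validation; class weighting stands in for the paper's rare-event technique, and the dataset and parameter grids are illustrative):

```python
# Sketch: elastic-net logistic regression with nested cross-validation
# (inner layer tunes hyperparameters, outer layer assesses the model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Imbalanced toy data: rare failure events among many variables.
X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=10, weights=[0.97, 0.03],
                           random_state=0)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           class_weight="balanced", max_iter=5000)

# Inner CV layer: select regularisation strength C and l1/l2 mix.
inner = GridSearchCV(model,
                     {"C": [0.01, 0.1, 1.0],
                      "l1_ratio": [0.2, 0.5, 0.8]},
                     scoring="roc_auc", cv=5)

# Outer CV layer: assess the whole tuned pipeline on held-out folds.
outer_scores = cross_val_score(inner, X, y, scoring="roc_auc", cv=5)
print(f"nested-CV ROC AUC: {outer_scores.mean():.3f} +/- {outer_scores.std():.3f}")
```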
Abstract:
GeneSplicer is a new, flexible system for detecting splice sites in the genomic DNA of various eukaryotes. The system has been tested successfully using DNA from two reference organisms: the model plant Arabidopsis thaliana and human. It was compared to six programs representing the leading splice site detectors for each of these species: NetPlantGene, NetGene2, HSPL, NNSplice, GENIO and SpliceView. In each case GeneSplicer performed comparably to the best alternative, in terms of both accuracy and computational efficiency.