929 results for PREDICTIVE PERFORMANCE
Abstract:
The aim of this thesis is to predict the performance of doctoral students at the Universidad de Girona from their personal (background), attitudinal and social network characteristics. The population studied consists of third- and fourth-year doctoral students and their thesis supervisors. To collect the data, a web questionnaire was designed, specifying its advantages and taking into account some traditional problems of non-coverage and non-response. A web questionnaire was chosen because of the complexity involved in social network questions. Through a series of instructions, the electronic questionnaire reduces response time and makes the task less burdensome. The web questionnaire is also self-administered, which, according to the literature, yields more honest answers than interviewer-administered questionnaires. The quality of social network questions in web questionnaires for egocentric data is analysed. To this end, the reliability and validity of this type of question are estimated, for the first time, through the Multitrait-Multimethod (MTMM) model. Since the data are egocentric they can be considered hierarchical, and a multilevel Multitrait-Multimethod model is used for the first time. Reliability and validity can be obtained at the individual level (within-group component) or at the group level (between-group component), and they are used to carry out a meta-analysis with other European universities in order to analyse certain questionnaire design characteristics. These characteristics examine whether social network questions asked in web questionnaires are more reliable and valid when asked "by questions" or "by alters", whether all frequency labels are shown for the items or only the first and last ones, and whether a colour or a black-and-white questionnaire design is better. The quality of the complete social network is also analysed; in this specific case, the networks are the university's research groups. The problems of missing data in complete networks are addressed. A new alternative is proposed to the usual solutions of the egocentric network or proxy respondents. We have named this new alternative the "Nosduocentered Network"; it is based on two central actors in a network. When regression models are estimated, the Nosduocentered network has more predictive power for doctoral students' performance than the egocentric network. In addition, the correlations of the attitudinal variables are corrected for attenuation, given the small sample size. Finally, regressions are run for the three types of variables (background, attitudinal and social network), which are then combined to analyse which best predicts doctoral students' performance (measured by academic publications). The results indicate that the academic performance of doctoral students depends on personal (background) and attitudinal variables. The results obtained are also compared with other published studies.
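For reference, the attenuation correction mentioned above is normally Spearman's disattenuation formula; a minimal sketch follows, assuming reliability estimates for the two attitudinal variables are available (function and variable names are illustrative, not the thesis's):

```python
# Spearman's correction for attenuation: a minimal sketch, assuming the
# reliabilities of the two attitudinal variables are known (e.g. from the
# MTMM estimates). Names and numbers are illustrative, not the thesis's.
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Correlation corrected for measurement error in both variables."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Example: an observed correlation of 0.30 with reliabilities 0.7 and 0.8
print(round(disattenuate(0.30, 0.7, 0.8), 3))  # ~0.401
```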
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
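The abstract does not give the reconciliation equations; as background, here is a minimal sketch of steady-state weighted least-squares data reconciliation under a linear balance constraint. The dynamic, outlier- and bias-aware formulation used in the paper is more elaborate; the system and values below are illustrative:

```python
# Minimal steady-state data reconciliation sketch (not the paper's dynamic
# formulation): minimize (y - x)' W^-1 (y - x) subject to A x = 0, whose
# closed-form solution is x_hat = y - W A' (A W A')^-1 A y.
import numpy as np

def reconcile(y, A, W):
    """Project measurements y onto the constraint A x = 0 (W = error covariance)."""
    AWAT = A @ W @ A.T
    return y - W @ A.T @ np.linalg.solve(AWAT, A @ y)

# Example: a single mass balance f1 + f2 - f3 = 0 with noisy flow measurements
A = np.array([[1.0, 1.0, -1.0]])
W = np.diag([0.1, 0.1, 0.2])
y = np.array([10.2, 5.1, 14.7])
print(reconcile(y, A, W))  # reconciled flows satisfy the balance exactly
```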
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding-horizon optimal control computation in nonlinear model predictive control.
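For context on the receding-horizon computation that DISOPE is intended to aid, a bare-bones nonlinear MPC loop on a toy scalar system is sketched below. It illustrates only the receding-horizon mechanics with a deliberate model-reality mismatch, not the DISOPE iteration itself; the dynamics and costs are invented for illustration:

```python
# Bare-bones receding-horizon loop for a toy scalar system. Illustrative only;
# it does not implement the DISOPE model-reality iteration.
import numpy as np
from scipy.optimize import minimize

def plant(x, u):            # "reality"
    return x + u + 0.1 * x**2

def model(x, u):            # deliberately simplified prediction model
    return x + u

def mpc_step(x0, horizon=5, u_bound=1.0):
    def cost(u_seq):
        x, J = x0, 0.0
        for u in u_seq:
            x = model(x, u)
            J += x**2 + 0.1 * u**2          # quadratic stage cost
        return J
    res = minimize(cost, np.zeros(horizon),
                   bounds=[(-u_bound, u_bound)] * horizon)
    return res.x[0]                          # apply only the first control

x = 2.0
for _ in range(10):
    u = mpc_step(x)
    x = plant(x, u)                          # model-reality mismatch acts here
print(round(x, 3))
```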
Abstract:
The authors compare the performance of two types of controllers: one based on a multilayered network and the other based on a single-layered CMAC network (cerebellar model articulation controller). The neurons (information processing units) in the multilayered network use Gaussian activation functions. The control scheme considered is a predictive control algorithm, along the lines used by Willis et al. (1991) and Kambhampati and Warwick (1991). The process selected as a test bed is a continuous stirred tank reactor. The reaction taking place is an irreversible exothermic reaction in a constant-volume reactor cooled by a single coolant stream. This reactor is a simplified version of the first tank in the two-tank system given by Henson and Seborg (1989).
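A minimal sketch of a one-hidden-layer network with Gaussian activation units of the kind mentioned above; the centres, width and output weights are placeholders, not the authors' trained values:

```python
# One-hidden-layer network with Gaussian (RBF) activations: a minimal sketch.
# Centres, width and output weights are placeholders, not the authors' values.
import numpy as np

def gaussian_net_predict(x, centres, width, weights):
    """x: (n_features,), centres: (n_units, n_features), weights: (n_units,)."""
    dist2 = np.sum((centres - x) ** 2, axis=1)
    phi = np.exp(-dist2 / (2.0 * width**2))   # Gaussian activations
    return phi @ weights                      # linear output layer

centres = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
weights = np.array([0.5, -1.2, 0.8])
print(gaussian_net_predict(np.array([1.0, 0.5]), centres, width=0.7, weights=weights))
```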
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also verifies experimentally a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
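To illustrate how input constraints can be transformed by a feedback-linearising mapping, here is a minimal sketch for a scalar control-affine system; the actual mappings for the manipulator arm in the paper are multivariable and more involved:

```python
# Mapping actuator bounds through a feedback-linearising law: a minimal sketch
# for a scalar control-affine system xdot = f(x) + g(x)*u with u = (v - f(x))/g(x),
# so bounds on u become state-dependent bounds on the linearised input v.
# This is an illustration, not the paper's mapping for the manipulator arm.

def v_bounds(f_x: float, g_x: float, u_min: float, u_max: float):
    """Bounds on v equivalent to u_min <= u <= u_max at the current state."""
    lo, hi = f_x + g_x * u_min, f_x + g_x * u_max
    return (lo, hi) if g_x > 0 else (hi, lo)

# Example: at a state where f(x) = -0.3 and g(x) = 2.0, torque limits of +/-1
print(v_bounds(-0.3, 2.0, -1.0, 1.0))  # state-dependent bounds on v
```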
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real world ensembles. An alternative is to consider an ensemble as merely a source of information rather than the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically not acting on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
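A schematic sketch of Gaussian kernel dressing with an affine map acting on the whole ensemble and a weight on the climatological distribution, in the spirit of (though not necessarily identical to) the AKD parametrization described above; the parameter values are placeholders that would normally be fitted by minimising a proper score on training data:

```python
# Schematic kernel dressing with an affine ensemble map and a climatology weight,
# in the spirit of (not necessarily identical to) the AKD described above.
# Parameters a, b, sigma, w are placeholders to be fitted, e.g. by minimising a
# proper score such as the ignorance score on a training set.
import numpy as np
from scipy.stats import norm

def dressed_density(y, ensemble, a, b, sigma, w, clim_mean, clim_std):
    z = a + b * np.asarray(ensemble)                   # affine map of the whole ensemble
    kernels = norm.pdf(y, loc=z, scale=sigma).mean()   # equal-weight Gaussian kernels
    clim = norm.pdf(y, loc=clim_mean, scale=clim_std)  # climatological component
    return (1.0 - w) * kernels + w * clim

ens = [12.1, 13.4, 11.8, 14.0, 12.7]
print(dressed_density(13.0, ens, a=0.2, b=0.95, sigma=0.8, w=0.1,
                      clim_mean=12.0, clim_std=3.0))
```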
Predictive models for chronic renal disease using decision trees, naïve Bayes and case-based methods
Abstract:
Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. Hidden patterns and relationships often go untapped, and advanced data mining techniques can remedy this. This thesis mainly deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers are applied and the quality of the output is evaluated. The three classifiers analysed are the Decision Tree, Naïve Bayes and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Further, sensitivity and specificity are used as statistical measures to examine the performance of the binary classification: sensitivity (also called recall rate in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models; it consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
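Since sensitivity and specificity are the evaluation measures used, a minimal sketch of how they are computed from a binary confusion matrix (the counts below are illustrative, not the thesis's results):

```python
# Sensitivity (recall) and specificity from a binary confusion matrix: a minimal
# sketch; the counts are illustrative, not the thesis's results.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)   # proportion of actual positives correctly identified
    specificity = tn / (tn + fp)   # proportion of actual negatives correctly identified
    return sensitivity, specificity

print(sensitivity_specificity(tp=45, fn=5, tn=38, fp=12))  # (0.9, 0.76)
```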
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Background. The aim of this study was to evaluate the influence of zero-value subtraction on the performance of two laser fluorescence (LF) devices developed to detect occlusal caries. Methods. The authors selected 119 permanent molars. Two examiners assessed three areas (cuspal, middle and cervical) of both mesial and distal portions of the buccal surface and one occlusal site using an LF device and an LF pen. For each tooth, the authors subtracted the value measured in the cuspal, middle and cervical areas in the buccal surface from the value measured in the respective occlusal site. Results. The authors observed differences among the readings for both devices in the cuspal, middle and cervical areas in the buccal surface as well as differences for both devices with and without the zero-value subtraction in the occlusal surface. When the authors did not perform the zero-value subtraction, they found statistically significant differences for sensitivity and accuracy for the LF device. When this was done with the LF pen, specificity increased and sensitivity decreased significantly. Conclusions. For the LF device, the zero-value subtraction decreased the sensitivity. For this reason, the authors concluded that clinicians can obtain measures with the LF device effectively without using zero-value subtraction. For the LF pen, however, the absence of the zero-value subtraction changed both the sensitivity and specificity, and so the authors concluded that clinicians should not eliminate this step from the procedure. Clinical Implications. When using the LF device, clinicians might not need to perform the zero-value subtraction; however, for the LF pen, clinicians should do so.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper presents a modeling effort for developing safety performance models (SPMs) for urban intersections in three major Brazilian cities. The proposed methodology for calibrating SPMs is divided into the following steps: defining the safety study objective, choosing predictive variables and sample size, data acquisition, defining the model expression and model parameters, and model evaluation. Among the predictive variables explored in the calibration phase were exposure variables (AADT), number of lanes, number of approaches and central median status. SPMs were obtained for three cities: Fortaleza, Belo Horizonte and Brasilia. The SPMs developed for signalized intersections in Fortaleza and Belo Horizonte had the same structure and the same most significant independent variables, namely AADT entering the intersection and number of lanes; in addition, the coefficients of the best models were in the same range of values. For Brasilia, because of the sample size, signalized and unsignalized intersections were grouped, and the AADT was split into minor and major approaches, which were the most significant variables. This paper also evaluated SPM transferability to other jurisdictions. The SPMs for signalized intersections from Fortaleza and Belo Horizonte were recalibrated (in terms of the calibration factor Cx) to the city of Porto Alegre. The models were adjusted following the Highway Safety Manual (HSM) calibration procedure and yielded Cx values of 0.65 and 2.06 for the Fortaleza and Belo Horizonte SPMs, respectively. This paper presents the experience and future challenges of the initiatives on development of SPMs in Brazil, which can serve as a guide for other countries at the same stage in this subject. (C) 2014 Elsevier Ltd. All rights reserved.
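The HSM recalibration mentioned above uses a calibration factor equal to the ratio of observed to predicted crashes over the calibration sites. A minimal sketch follows; the SPM functional form (N = alpha * AADT^beta) and all numbers are typical illustrations, not the paper's fitted models:

```python
# HSM-style calibration factor: Cx = sum(observed crashes) / sum(predicted crashes)
# over the calibration sites. The SPM form below (N = alpha * AADT^beta) is a
# typical functional form and the numbers are illustrative, not the paper's models.
import numpy as np

def predicted_crashes(aadt, alpha, beta):
    return alpha * np.asarray(aadt, dtype=float) ** beta

def calibration_factor(observed, predicted):
    return np.sum(observed) / np.sum(predicted)

aadt = [18000, 25000, 32000]          # illustrative intersection volumes
observed = [4, 7, 9]                  # illustrative observed crash counts
pred = predicted_crashes(aadt, alpha=2.0e-4, beta=1.05)
print(round(calibration_factor(observed, pred), 2))
```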
Abstract:
Background. Rest myocardial perfusion imaging (MPI) is effective in managing patients with acute chest pain in developed countries. We aimed to define the role and feasibility of rest MPI in low-to-middle income countries. Methods and Results. Low-to-intermediate risk patients (n = 356) presenting with chest pain to ten centers in eight developing countries were injected with a Tc-99m-based tracer, and standard imaging was performed. The primary outcome was a composite of death, non-fatal myocardial infarction (MI), recurrent angina, and coronary revascularization at 30 days. Sixty-nine patients had a positive MPI (19.4%), and 52 patients (14.6%) had a primary outcome event. An abnormal rest-MPI result was the only variable which independently predicted the primary outcome [adjusted odds ratio (OR) 8.19, 95% confidence interval 4.10-16.40, P = .0001]. The association of MPI result and the primary outcome was stronger (adjusted OR 17.35) when only the patients injected during pain were considered. Rest-MPI had a negative predictive value of 92.7% for the primary outcome, improving to 99.3% for the hard event composite of death or MI. Conclusions. Our study demonstrates that rest-MPI is a reliable test for ruling out MI when applied to patients in developing countries. (J Nucl Cardiol 2012;19:1146-53.)
Abstract:
During the last three decades, several predictive models have been developed to estimate the somatic production of macroinvertebrates. Although the models have been evaluated for their ability to assess the production of macrobenthos in different marine ecosystems, these approaches have not been applied specifically to sandy beach macrofauna and may not be directly applicable to this transitional environment. Hence, in this study, a broad literature review of sandy beach macrofauna production was conducted and estimates obtained with cohort-based and size-based methods were collected. The performance of nine models in estimating the production of individual populations from the sandy beach environment, evaluated for all taxonomic groups combined and for individual groups separately, was assessed by comparing the production predicted by the models to the estimates obtained from the literature (observed production). Most of the models overestimated population production compared to observed production estimates, whether for all populations combined or for more specific taxonomic groups. However, estimates from two models developed by Cusson and Bourget provided the best fits to measured production, and thus represent the best alternatives to the cohort-based and size-based methods in this habitat. The consistent performance of one of these Cusson and Bourget models, which was developed for the macrobenthos of sandy substrate habitats (C&B-SS), shows that the performance of a model does not depend on whether it was developed for a specific taxonomic group. Moreover, since some widely used models (e.g., the Robertson model) show very different responses when applied to the macrofauna of different marine environments (e.g., sandy beaches and estuaries), prior evaluation of these models is essential.
Abstract:
Introduction. Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods. Amniotic fluid samples were submitted to DNA extraction and amplification by the following 4 Toxoplasma techniques performed with parasite B1 gene primers: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR) and efficiency (Ef). Results. Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers that were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false-negatives), nested-PCR detected 58 cases (1 false-negative and 4 false-positives), multiplex-nested-PCR detected 57 cases (2 false-negatives), and real-time PCR detected 58 cases (1 false-negative). Conclusions. The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
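The seven parameters are standard functions of the 2x2 diagnostic table; the sketch below computes them using counts consistent with the real-time PCR results reported above (58 true positives, 1 false negative, 41 true negatives, 0 false positives), which reproduces the reported Se, Sp, PPV, NPV, NLR and Ef and shows why the PLR is infinite when specificity is 100%:

```python
# The seven diagnostic parameters as functions of a 2x2 table: a minimal sketch.
# The counts are consistent with the real-time PCR results reported above; with
# zero false positives the positive likelihood ratio is infinite.
def diagnostic_parameters(tp, fn, tn, fp):
    se = tp / (tp + fn)                              # sensitivity
    sp = tn / (tn + fp)                              # specificity
    ppv = tp / (tp + fp)                             # positive predictive value
    npv = tn / (tn + fn)                             # negative predictive value
    plr = se / (1 - sp) if sp < 1 else float("inf")  # positive likelihood ratio
    nlr = (1 - se) / sp                              # negative likelihood ratio
    ef = (tp + tn) / (tp + fn + tn + fp)             # efficiency (overall accuracy)
    return se, sp, ppv, npv, plr, nlr, ef

print(diagnostic_parameters(tp=58, fn=1, tn=41, fp=0))
```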
Abstract:
Constraints are widely present in flight control problems: actuator saturations and flight envelope limitations are only some examples. The ability of Model Predictive Control (MPC) to deal with constraints, combined with the increased computational power of modern computers, makes this approach attractive also for fast-dynamics systems such as agile air vehicles. This PhD thesis presents the results, achieved at the Aerospace Engineering Department of the University of Bologna in collaboration with the Dutch National Aerospace Laboratories (NLR), concerning the development of a model predictive control system for small-scale rotorcraft UAS. Several different predictive architectures have been evaluated and tested by means of simulation; as a result of this analysis, the most promising one has been used to implement three different control systems: a Stability and Control Augmentation System, a trajectory tracking system and a path following system. The systems have been compared with a corresponding baseline controller and showed several advantages in terms of performance, stability and robustness.