978 results for Prediction algorithms


Relevance:

20.00%

Publisher:

Abstract:

Currently, the quality of the Indonesian national road network is inadequate due to several constraints, including overcapacity and overloaded trucks. The high deterioration rate of road infrastructure in developing countries, along with major budgetary restrictions and high traffic growth, has led to an emerging need to improve the performance of the highway maintenance system. However, the high number of intervening factors and their complex effects require advanced tools to solve this problem successfully. The high learning capabilities of Data Mining (DM) make it a powerful solution to this problem. In the past, these tools have been successfully applied to solve complex and multi-dimensional problems in various scientific fields. Therefore, DM is expected to be able to analyze the large amount of pavement and traffic data, identify the relationships between variables, and provide predictive information. In this paper, we present a new approach to predict the International Roughness Index (IRI) of pavement based on DM techniques. DM was used to analyze the initial IRI data, including age, Equivalent Single Axle Load (ESAL), cracking, potholes, rutting, and long cracks. The model was developed and verified using data from the Integrated Indonesia Road Management System (IIRMS) measured with the National Association of Australian State Road Authorities (NAASRA) roughness meter. The results of the proposed approach are compared with the IIRMS analytical model adapted to the IRI, and the advantages of the new approach are highlighted. We show that the novel data-driven model is able to learn, with high accuracy, the complex relationships between the IRI and the contributing factors of overloaded trucks.
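A minimal sketch of the kind of data-driven IRI predictor described in this abstract is shown below. The file name, column names, and the choice of a random-forest regressor are assumptions for illustration, not the authors' actual IIRMS pipeline.

```python
# Illustrative sketch: predict IRI from pavement and traffic attributes.
# Column names and the model choice are assumptions, not the study's code.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical table: one row per pavement segment.
df = pd.read_csv("iirms_segments.csv")  # assumed file layout
features = ["age_years", "esal", "crack_pct", "potholes", "rutting_mm", "long_crack_m"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["iri"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out segments:", r2_score(y_test, model.predict(X_test)))
```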

Relevance:

20.00%

Publisher:

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance:

20.00%

Publisher:

Abstract:

Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties that can then be used to direct the execution of other applications. The resulting values come from the distributed computation of functions like count, sum, and average. Application examples include determining the network size, total storage capacity, average load, majorities, and many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, message and time complexity. Due to the considerable amount and variety of aggregation algorithms, it can be difficult and time consuming to determine which techniques will be more appropriate to use in specific settings, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them in a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
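One classic technique in this family is pairwise gossip averaging; the toy simulation below illustrates it under simplifying assumptions (a fixed node set, uniform random pairing, no failures), and is not taken from the survey itself.

```python
# Toy simulation of pairwise gossip averaging, one aggregation technique
# in the family surveyed above. Topology and update rule are illustrative only.
import random

values = [10.0, 0.0, 4.0, 6.0, 30.0]      # local inputs, one per node
target = sum(values) / len(values)        # true network-wide average

for _ in range(200):                      # gossip rounds
    i, j = random.sample(range(len(values)), 2)
    avg = (values[i] + values[j]) / 2.0   # the chosen pair averages its estimates
    values[i] = values[j] = avg           # mass is conserved, so estimates converge

print(target, values)                     # every node approaches the global average
```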

Relevance:

20.00%

Publisher:

Abstract:

The use of genome-scale metabolic models has been rapidly increasing in fields such as metabolic engineering. An important part of a metabolic model is the biomass equation, since this reaction will ultimately determine the predictive capacity of the model in terms of essentiality and flux distributions. Thus, in order to obtain a reliable metabolic model, the biomass precursors and their coefficients must be as precise as possible. Ideally, the biomass composition would be determined experimentally, but when no experimental data are available it is established by approximation to closely related organisms. Computational methods, however, can extract some information from the genome, such as amino acid and nucleotide compositions. The main objectives of this study were to compare the biomass composition of several organisms and to evaluate how biomass precursor coefficients affected the predictability of several genome-scale metabolic models by comparing predictions with experimental data in the literature. For that, the biomass macromolecular composition was experimentally determined and the amino acid composition was both experimentally and computationally estimated for several organisms. Sensitivity analysis studies were also performed with the Escherichia coli iAF1260 metabolic model concerning specific growth rates and flux distributions. The results obtained suggest that the macromolecular composition is conserved among related organisms. In contrast, experimental data on amino acid composition show no similarities among related organisms. It was also observed that the impact of macromolecular composition on specific growth rates and flux distributions is larger than the impact of amino acid composition, even when data from closely related organisms are used.
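The sketch below illustrates, on an invented two-metabolite toy network rather than iAF1260, how perturbing a biomass-precursor coefficient changes the growth rate predicted by flux balance analysis; the network, bounds, and coefficients are assumptions for illustration only.

```python
# Toy flux-balance sketch: biomass coefficients change the predicted growth rate.
# The two-metabolite network is invented; real analyses use genome-scale models.
import numpy as np
from scipy.optimize import linprog

def max_growth(a, b):
    """Maximise biomass flux for the biomass reaction a*A + b*B -> biomass."""
    # Columns: v_uptake (-> A), v_conv (A -> B), v_biomass.
    S = np.array([[1.0, -1.0, -a],    # steady-state balance of metabolite A
                  [0.0,  1.0, -b]])   # steady-state balance of metabolite B
    c = [0.0, 0.0, -1.0]              # linprog minimises, so negate biomass flux
    bounds = [(0, 10), (0, None), (0, None)]
    res = linprog(c, A_eq=S, b_eq=[0.0, 0.0], bounds=bounds)
    return res.x[2]

print(max_growth(1.0, 1.0))   # baseline biomass coefficients
print(max_growth(1.0, 1.5))   # perturbed precursor coefficient -> lower growth
```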

Relevance:

20.00%

Publisher:

Abstract:

"Series: Solid mechanics and its applications, vol. 226"

Relevance:

20.00%

Publisher:

Abstract:

In this project, numerical algorithms will be developed for nonlinear hyperbolic-parabolic systems of partial differential equations. Such systems have applications in wave propagation in aerospace and astrophysical settings. General objectives: 1) Develop and improve numerical algorithms in order to increase the quality of simulations of the propagation and interaction of nonlinear gas-dynamic and magneto-gas-dynamic waves. 2) Develop computational codes to simulate high-enthalpy gas-dynamic flows, including chemical changes and dispersive and diffusive effects. 3) Develop computational codes to simulate ideal and real magneto-gas-dynamic flows. 4) Apply the new algorithms and computational codes to the solution of the aerothermodynamic flow around bodies entering the Earth's atmosphere. 5) Apply the new algorithms and computational codes to the simulation of the nonlinear dynamic behavior of magnetic loops in the solar corona. 6) Develop new models to describe the nonlinear behavior of magnetic loops in the solar corona. The main objective of this project is to improve numerical algorithms for simulating the propagation and interaction of nonlinear waves in two gaseous media: those without free electric charge (gas-dynamic flows) and those with free electric charge (magneto-gas-dynamic flows). At the same time, computational codes implementing the improved numerical techniques will be developed. The numerical algorithms will be applied to increase knowledge of topics of interest in aerospace engineering, such as the calculation of the heat flux and aerothermodynamic forces on objects entering the Earth's atmosphere, and in astrophysics, such as the propagation and interaction of waves, both for energy transfer and for the generation of instabilities in magnetic loops of the solar corona. These two topics share the numerical techniques and algorithms with which they will be treated. The ideal gas-dynamic and magneto-gas-dynamic equations form hyperbolic systems of differential equations and can be solved using Riemann solvers together with the finite volume method (Toro 1999; Udrea 1999; LeVeque 1992 and 2005). The inclusion of diffusive effects makes the systems of equations hyperbolic-parabolic. The parabolic contribution can be treated as source terms, handled either explicitly or implicitly (Udrea 1999; LeVeque 2005). To analyze the flow around bodies entering the atmosphere, the chemically reacting Navier-Stokes equations will be used, as long as the temperature does not exceed 6000 K. For higher temperatures, ionization effects must be considered (Anderson, 1989). Both diffusive effects and chemical changes will be treated as source terms in the Euler equations. To treat wave propagation, energy transfer, and instabilities in magnetic loops of the solar corona, the ideal and real magneto-gas-dynamic equations will be used. In this case it will also be convenient to implement source terms for transport phenomena such as heat flux and radiation.
The codes will use the finite volume technique, together with Total Variation Diminishing (TVD) schemes on structured and unstructured meshes.
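As a minimal illustration of a finite-volume TVD scheme of the kind mentioned above, the sketch below solves the scalar Burgers equation with a minmod-limited reconstruction and a Rusanov flux; the mesh, limiter, flux, and boundary treatment are illustrative choices, not the project's actual codes.

```python
# 1-D finite-volume sketch with a minmod-limited (TVD) reconstruction,
# applied to Burgers' equation u_t + (u^2/2)_x = 0 as a stand-in example.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

N, L, T = 200, 1.0, 0.2
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)          # Riemann-type initial data

t = 0.0
while t < T:
    dt = 0.4 * dx / max(np.max(np.abs(u)), 1e-12)   # CFL condition
    up, um = np.roll(u, -1), np.roll(u, 1)           # periodic neighbours (for brevity)
    slope = minmod(u - um, up - u)                   # TVD-limited slopes
    uL = u + 0.5 * slope                             # state left of interface i+1/2
    uR = np.roll(u - 0.5 * slope, -1)                # state right of interface i+1/2
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    a = np.maximum(np.abs(uL), np.abs(uR))
    flux = 0.5 * (fL + fR) - 0.5 * a * (uR - uL)     # Rusanov numerical flux
    u = u - dt / dx * (flux - np.roll(flux, 1))      # conservative update
    t += dt

print(u[:5])  # the shock has propagated to the right
```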

Relevance:

20.00%

Publisher:

Abstract:

In our previous project, we approximated the computation of a definite integral with integrands exhibiting large functional variations. Our approach parallelizes the computation algorithm of an adaptive quadrature method based on Newton-Cotes rules. The first results were communicated at several national and international conferences; they allowed us to begin a characterization of the existing quadrature rules and a classification of some functions used as test functions. These classification and characterization tasks have not been finished, so we intend to continue them in order to report on whether or not our technique is advisable to use. To carry out this task, a base of test functions will be sought and the range of quadrature rules to be used will be broadened. In addition, we propose to restructure the computation of some routines involved in calculating the minimum energy of a molecule. This program already exists in a sequential version and is modeled using the LCAO approximation. It obtains good results in terms of precision, compared with similar international publications, but requires a significantly long computation time. Our proposal is to parallelize the aforementioned algorithm on at least two levels: 1) decide whether it is better to distribute the computation of one integral among several processors or to distribute different integrals among different processors. We must bear in mind that in parallel architectures based on networks (typically local area networks, LANs), the time spent sending messages between processors is very significant when measured in the number of computational operations a processor can complete in that time. 2) If necessary, parallelize the computation of double and/or triple integrals. For the development of our proposal, heuristics will be developed to verify and build models in the aforementioned cases, aimed at improving the known computation routines, and the algorithms will be tested on benchmark cases. The methodology is the usual one in Numerical Analysis. Each proposal requires: a) implementing a computation algorithm, aiming for versions that improve on the existing ones; b) performing comparison exercises with the existing routines to confirm or discard a better numerical performance; c) carrying out theoretical error studies related to the method and to the implementation. An interdisciplinary team was formed, comprising researchers from both Computer Science and Mathematics. Goals: we expect to obtain a characterization of the quadrature rules according to their effectiveness, with functions exhibiting oscillatory behavior and exponential decay, and to develop adequate, optimized computational implementations based on parallel architectures.
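A minimal sketch of the first parallelization strategy described above (splitting one integral into coarse panels and distributing the adaptive Newton-Cotes work across processes) is given below; the oscillatory, exponentially decaying test integrand and the panel layout are arbitrary examples, not the project's test set.

```python
# Sketch: distribute panels of a single definite integral across processes,
# with recursive adaptive Simpson (a Newton-Cotes rule) inside each panel.
import math
from multiprocessing import Pool

def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def adaptive(f, a, b, tol):
    """Recursive adaptive Simpson rule on [a, b]."""
    m = (a + b) / 2.0
    whole, left, right = simpson(f, a, b), simpson(f, a, m), simpson(f, m, b)
    if abs(left + right - whole) < 15.0 * tol:
        return left + right
    return adaptive(f, a, m, tol / 2.0) + adaptive(f, m, b, tol / 2.0)

def f(x):
    return math.sin(50.0 * x) * math.exp(-x)   # rapidly varying example integrand

def panel(bounds, tol=1e-10):
    a, b = bounds
    return adaptive(f, a, b, tol)

if __name__ == "__main__":
    panels = [(i / 8.0, (i + 1) / 8.0) for i in range(8)]   # coarse split of [0, 1]
    with Pool(4) as pool:
        print(sum(pool.map(panel, panels)))
```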

Relevance:

20.00%

Publisher:

Abstract:

As digital image processing techniques become increasingly used in a broad range of consumer applications, developers have come to recognise the evaluation of algorithm performance as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is crucial that we are able to study their performance empirically. Apart from the field of biometrics, little emphasis has been put on algorithm performance evaluation until now, and where evaluation has taken place, it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems quickly and more efficiently.
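The skeleton below illustrates the general shape of such an evaluation harness: each candidate algorithm is run over the same annotated test set and scored with a common metric. The function names, the PSNR metric, and the dummy data are assumptions, not the framework described in the paper.

```python
# Illustrative skeleton of an automated image-algorithm evaluation harness.
import numpy as np

def psnr(reference, result):
    """Peak signal-to-noise ratio between a reference image and a result."""
    mse = np.mean((reference.astype(float) - result.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def evaluate(algorithms, test_set):
    """Run each algorithm on each (input, ground_truth) pair and average the score."""
    report = {}
    for name, algo in algorithms.items():
        scores = [psnr(truth, algo(image)) for image, truth in test_set]
        report[name] = sum(scores) / len(scores)
    return report

# Usage with dummy data and a trivial "identity" algorithm:
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (32, 32)).astype(np.uint8)
clean = noisy.copy()
print(evaluate({"identity": lambda img: img}, [(noisy, clean)]))
```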

Relevance:

20.00%

Publisher:

Abstract:

Background: According to some international studies, patients with acute coronary syndrome (ACS) and increased left atrial volume index (LAVI) have a worse long-term prognosis. However, national Brazilian studies confirming this prediction are still lacking. Objective: To evaluate LAVI as a predictor of major cardiovascular events (MCE) in patients with ACS during a 365-day follow-up. Methods: Prospective cohort of 171 patients diagnosed with ACS whose LAVI was calculated within 48 hours after hospital admission. According to LAVI, two groups were defined: normal LAVI (≤ 32 mL/m2) and increased LAVI (> 32 mL/m2). Both groups were compared regarding clinical and echocardiographic characteristics, in-hospital and out-of-hospital outcomes, and the occurrence of MCE within 365 days. Results: Increased LAVI was observed in 78 patients (45%), and was associated with older age, higher body mass index, hypertension, history of myocardial infarction and previous angioplasty, and lower creatinine clearance and ejection fraction. During hospitalization, acute pulmonary edema was more frequent in patients with increased LAVI (14.1% vs. 4.3%, p = 0.024). After discharge, the occurrence of the combined outcome for MCE was higher (p = 0.001) in the group with increased LAVI (26%) as compared to the normal LAVI group (7%) [RR (95% CI) = 3.46 (1.54-7.73) vs. 0.80 (0.69-0.92)]. After Cox regression, increased LAVI increased the probability of MCE (HR = 3.08, 95% CI = 1.28-7.40, p = 0.012). Conclusion: Increased LAVI is an important predictor of MCE in a one-year follow-up.
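A hedged sketch of this kind of analysis (dichotomising LAVI at 32 mL/m2 and fitting a Cox proportional-hazards model for MCE) is shown below; the file name, column names, and covariate set are assumptions, not the study's actual code or variables.

```python
# Sketch: Cox regression of MCE on increased LAVI (assumed data layout).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("acs_cohort.csv")                     # hypothetical cohort table
df["lavi_increased"] = (df["lavi_ml_m2"] > 32).astype(int)

cph = CoxPHFitter()
cph.fit(df[["followup_days", "mce", "lavi_increased", "age", "ejection_fraction"]],
        duration_col="followup_days", event_col="mce")
cph.print_summary()                                    # hazard ratios and 95% CIs
```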

Relevance:

20.00%

Publisher:

Abstract:

Background: The equations predicting maximal oxygen uptake (VO2max or peak) presently used in cardiopulmonary exercise testing (CPET) software in Brazil have not been adequately validated. These equations are very important for the diagnostic capacity of this method. Objective: To build and validate a Brazilian Equation (BE) for prediction of VO2peak, in comparison with the equation cited by Jones (JE) and the Wasserman algorithm (WA). Methods: Treadmill evaluation was performed on 3119 individuals with CPET (breath by breath). The construction group (CG) of the equation consisted of 2495 healthy participants. The other 624 individuals were allocated to the external validation group (EVG). The BE (derived from a multivariate regression model) considered age, gender, body mass index (BMI) and physical activity level. The same equation was also tested in the EVG. Dispersion graphs and Bland-Altman analyses were built. Results: In the CG, the mean age was 42.6 years, 51.5% were male, the average BMI was 27.2, and the physical activity distribution was: 51.3% sedentary, 44.4% active and 4.3% athletes. A strong correlation (0.807) between the BE and the CPET-measured VO2peak was observed. In contrast, the mean VO2peak predicted by the JE and the WA differed both from the CPET-measured VO2peak and from the value obtained with the BE (p = 0.001). Conclusion: The BE yields VO2peak values close to those directly measured by CPET, while the Jones and Wasserman equations differ significantly from the real VO2peak.
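The sketch below shows how a prediction equation of this general form (VO2peak from age, gender, BMI, and physical activity level) could be fitted and then checked on an external validation group with Bland-Altman limits of agreement. The file and column names are assumptions, and the actual published BE coefficients are not reproduced here.

```python
# Sketch: fit a multivariate prediction equation and check it Bland-Altman style.
import pandas as pd
from sklearn.linear_model import LinearRegression

cg = pd.read_csv("cpet_construction_group.csv")        # hypothetical data layout
X = cg[["age", "male", "bmi", "activity_level"]]       # activity: 0 sedentary, 1 active, 2 athlete
reg = LinearRegression().fit(X, cg["vo2peak_measured"])
print(dict(zip(X.columns, reg.coef_)), reg.intercept_)

evg = pd.read_csv("cpet_validation_group.csv")         # external validation group
pred = reg.predict(evg[X.columns])
diff = pred - evg["vo2peak_measured"]
print("bias:", diff.mean(),
      "limits of agreement:", diff.mean() - 1.96 * diff.std(),
      diff.mean() + 1.96 * diff.std())
```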

Relevance:

20.00%

Publisher:

Abstract:

Background: Studies have demonstrated the diagnostic accuracy and prognostic value of physical stress echocardiography in coronary artery disease. However, the prediction of mortality and major cardiac events in patients with an exercise test positive for myocardial ischemia is limited. Objective: To evaluate the effectiveness of physical stress echocardiography in the prediction of mortality and major cardiac events in patients with an exercise test positive for myocardial ischemia. Methods: This retrospective cohort study included 866 consecutive patients with an exercise test positive for myocardial ischemia who underwent physical stress echocardiography. Patients were divided into two groups: physical stress echocardiography negative (G1) or positive (G2) for myocardial ischemia. The endpoints analyzed were all-cause mortality and major cardiac events, defined as cardiac death and non-fatal acute myocardial infarction. Results: G2 comprised 205 patients (23.7%). During the mean 85.6 ± 15.0-month follow-up, there were 26 deaths, of which six were cardiac deaths, and 25 non-fatal myocardial infarction cases. The independent predictors of mortality were: age, diabetes mellitus, and positive physical stress echocardiography (hazard ratio: 2.69; 95% confidence interval: 1.20 - 6.01; p = 0.016). The independent predictors of major cardiac events were: age, previous coronary artery disease, positive physical stress echocardiography (hazard ratio: 2.75; 95% confidence interval: 1.15 - 6.53; p = 0.022) and absence of a 10% increase in ejection fraction. All-cause mortality and the incidence of major cardiac events were significantly higher in G2 (p < 0.001 and p = 0.001, respectively). Conclusion: Physical stress echocardiography provides additional prognostic information in patients with an exercise test positive for myocardial ischemia.
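A minimal sketch of the kind of group comparison reported here (Kaplan-Meier follow-up for negative versus positive stress echocardiography with a log-rank test) is given below; the file name and column names are assumptions about the dataset layout.

```python
# Sketch: compare survival between stress-echo-negative (G1) and -positive (G2) groups.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("stress_echo_cohort.csv")             # hypothetical layout
g1 = df[df["stress_echo_positive"] == 0]
g2 = df[df["stress_echo_positive"] == 1]

km = KaplanMeierFitter()
km.fit(g2["followup_months"], g2["death"], label="positive stress echo")
print(km.median_survival_time_)

result = logrank_test(g1["followup_months"], g2["followup_months"],
                      event_observed_A=g1["death"], event_observed_B=g2["death"])
print("log-rank p-value:", result.p_value)
```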

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Mathematics, habilitation thesis, 2006

Relevance:

20.00%

Publisher:

Abstract:

Data Mining, Vision Restoration, Treatment outcome prediction, Self-Organising-Map

Relevance:

20.00%

Publisher:

Abstract:

Abstract Background: Hemorheological and glycemic parameters and high density lipoprotein (HDL) cholesterol are used as biomarkers of atherosclerosis and thrombosis. Objective: To investigate the association and clinical relevance of erythrocyte sedimentation rate (ESR), fibrinogen, fasting glucose, glycated hemoglobin (HbA1c), and HDL cholesterol in the prediction of major adverse cardiovascular events (MACE) and coronary heart disease (CHD) in an outpatient population. Methods: 708 stable patients who visited the outpatient department were enrolled and followed for a mean period of 28.5 months. Patients were divided into two groups, patients without MACE and patients with MACE, the latter including cardiac death, acute myocardial infarction, newly diagnosed CHD, and cerebral vascular accident. We compared hemorheological and glycemic parameters and lipid profiles between the groups. Results: Patients with MACE had significantly higher ESR, fibrinogen, fasting glucose, and HbA1c, and lower HDL cholesterol, than patients without MACE. High ESR and fibrinogen and low HDL cholesterol significantly increased the risk of MACE in multivariate regression analysis. In patients with MACE, high fibrinogen and HbA1c levels increased the risk of multivessel CHD. Furthermore, ESR and fibrinogen were significantly positively correlated with HbA1c and negatively correlated with HDL cholesterol, but not correlated with fasting glucose. Conclusion: Hemorheological abnormalities, poor glycemic control, and low HDL cholesterol are correlated with each other and could serve as simple and useful surrogate markers and predictors for MACE and CHD in outpatients.
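The sketch below illustrates one way such a multivariate analysis can be set up: a logistic regression of MACE on the candidate biomarkers. The file name, column names, and covariate set are assumptions, not the study's actual model specification.

```python
# Sketch: multivariate logistic regression of MACE on candidate biomarkers.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("outpatient_cohort.csv")              # hypothetical cohort table
X = sm.add_constant(df[["esr", "fibrinogen", "hdl_cholesterol", "hba1c"]])
model = sm.Logit(df["mace"], X).fit()                  # mace: 1 = event, 0 = no event
print(model.summary())                                 # odds ratios via exp(coefficients)
```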

Relevance:

20.00%

Publisher:

Abstract:

Abstract Background: Heart disease in pregnancy is the leading cause of non-obstetric maternal death. Few Brazilian studies have assessed the impact of heart disease during pregnancy. Objective: To determine the risk factors associated with cardiovascular and neonatal complications. Methods: We evaluated 132 pregnant women with heart disease at a High-Risk Pregnancy outpatient clinic, from January 2005 to July 2010. Variables that could influence the maternal-fetal outcome were selected: age, parity, smoking, etiology and severity of the disease, previous cardiac complications, cyanosis, New York Heart Association (NYHA) functional class > II, left ventricular dysfunction/obstruction, arrhythmia, drug treatment change, time of prenatal care beginning and number of prenatal visits. The maternal-fetal risk index, Cardiac Disease in Pregnancy (CARPREG), was retrospectively calculated at the beginning of prenatal care, and patients were stratified into its three risk categories. Results: Rheumatic heart disease was the most prevalent etiology (62.12%). The most frequent complications were heart failure (11.36%) and arrhythmias (6.82%). Factors associated with cardiovascular complications on multivariate analysis were: drug treatment change (p = 0.009), previous cardiac complications (p = 0.013) and NYHA class III on the first prenatal visit (p = 0.041). The cardiovascular complication rates were 15.22% in CARPREG 0, 16.42% in CARPREG 1, and 42.11% in CARPREG > 1, differing from those estimated by the original index: 5%, 27% and 75%, respectively. The prematurity rate in this sample was 26.36%. Conclusion: The cardiovascular complication risk factors in this population were drug treatment change, previous cardiac complications and NYHA class III at the beginning of prenatal care. The CARPREG index used in this sample, composed mainly of patients with rheumatic heart disease, overestimated the number of events in pregnant women classified as CARPREG 1 and > 1, and underestimated it in low-risk patients (CARPREG 0).
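For orientation, the sketch below shows how a CARPREG-style score is tallied and mapped to the original estimated event rates (5%, 27%, 75%) quoted above. The four predictor flags follow the commonly cited form of the index, but their exact definitions and the example patient are assumptions for illustration.

```python
# Sketch: tally a CARPREG-style score and map it to the original estimated rates.
def carpreg_score(prior_cardiac_event, nyha_gt_2_or_cyanosis,
                  left_heart_obstruction, ef_below_40):
    # One point per predictor present (assumed definitions of the four predictors).
    return sum([prior_cardiac_event, nyha_gt_2_or_cyanosis,
                left_heart_obstruction, ef_below_40])

def estimated_event_rate(score):
    return {0: 0.05, 1: 0.27}.get(score, 0.75)   # CARPREG 0, 1, > 1

patient = dict(prior_cardiac_event=True, nyha_gt_2_or_cyanosis=False,
               left_heart_obstruction=False, ef_below_40=False)
s = carpreg_score(**patient)
print(s, estimated_event_rate(s))
```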