14 results for network prediction

at Universidad Politécnica de Madrid


Relevance:

60.00%

Abstract:

Overtopping is defined as the transport of a significant volume of water over the crest of a breakwater into the sheltered area. It is therefore the phenomenon that, in general, determines the breakwater's crest level, depending on the volume of water deemed admissible in view of the structure's functional and structural conditioning factors. In general, the amount of overtopping a breakwater can tolerate from the standpoint of structural integrity is far greater than the amount permissible from the standpoint of functionality; on the other hand, designing for an overtopping probability that is too low, or zero, would lead to designs incompatible with other considerations, such as aesthetics or cost. The ways to assess overtopping range from the most traditional, such as semi-empirical or empirical equations and reduced-scale physical model tests, to less usual ones such as the instrumentation of actual breakwaters (prototypes), artificial neural networks and numerical models. Measuring overtopping in reduced-scale physical model tests is straightforward, but the values obtained are affected to a greater or lesser degree by model-prototype scale effects, so they can only be considered an approximation of what actually happens. Nevertheless, given the complexity of the overtopping process and the multitude of physical phenomena and parameters involved, physical models remain the most accurate and reliable tool for the specific study of each case: they reveal the hydraulic and structural behaviour of the breakwater, identify possible design flaws before construction, and allow alternatives to be evaluated, with the consequent savings in construction costs through improvements to the initial design. Empirical expressions obtained from laboratory tests for calculating the overtopping rate depend not only on environmental conditions (wave height, wave period and water level) but also on the characteristics of the model, and are therefore only applicable within the range of validity of the tests performed in each case. The purpose of this Thesis is a comparative analysis of the overtopping formulations developed by different authors for different harbour breakwater typologies. First, the existing equations for estimating the overtopping rate on sloping and vertical breakwaters were compiled and analysed. These equations were then compared with the results obtained in a series of tests performed at the Centre for Port and Coastal Studies of CEDEX. Next, the neural network tool NN-OVERTOPPING2, developed in the European overtopping project CLASH ("Crest Level Assessment of Coastal Structures by Full Scale Monitoring, Neural Network Prediction and Hazard Analysis on Permissible Wave Overtopping"), was applied to the selected sloping-breakwater tests, so that the overtopping rates measured in the tests could also be contrasted with this neural-network-based method. The influence of wind on overtopping was then analysed through a series of reduced-scale physical model tests, generating waves with and without wind, on the vertical section of the Levante Breakwater (Málaga). Finally, a critical analysis of the comparison of each formulation against the selected tests leads to the conclusions of this Thesis.
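As a hedged illustration of the kind of semi-empirical formulation compared in the thesis, the sketch below evaluates a generic exponential overtopping formula of the form q/√(g·Hm0³) = a·exp(−b·Rc/Hm0), where q is the mean discharge, Hm0 the significant wave height and Rc the crest freeboard. The coefficients a and b are illustrative placeholders, not values taken from the thesis or from CLASH; real formulations calibrate them per structure type and wave conditions.

```python
import math

def overtopping_rate(Hm0, Rc, a=0.2, b=2.6, g=9.81):
    """Mean overtopping discharge q (m^3/s per m of crest) from a generic
    exponential formula q / sqrt(g*Hm0^3) = a * exp(-b * Rc/Hm0).
    a and b are illustrative coefficients only."""
    return a * math.sqrt(g * Hm0**3) * math.exp(-b * Rc / Hm0)

# Example: Hm0 = 4 m significant wave height, Rc = 6 m crest freeboard
print(overtopping_rate(4.0, 6.0))
```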

Relevance:

40.00%

Abstract:

This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root-mean-square error (RMSE), mean delay (MD) and high-frequency noise (HFCrms). HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter; it is 2.90±1.37 (mg/dl) for the original profiles.
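A minimal sketch of the smoothing idea, assuming a windowed causal fit with SciPy's smoothing spline: at each sample, a cubic spline is fitted only to the points seen so far and evaluated at the current time. The window length and smoothing factor are illustrative choices, not the published parameters.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def causal_spline_smooth(y, window=12, s=50.0):
    """Causally smooth a predicted glucose trace: at each sample, fit a
    cubic smoothing spline to the most recent `window` points only and
    keep the spline value at the current time (no future data used)."""
    y = np.asarray(y, dtype=float)
    out = y.copy()
    for t in range(window, len(y)):
        x = np.arange(t - window, t + 1)
        spl = UnivariateSpline(x, y[x], k=3, s=s)
        out[t] = spl(t)
    return out

def rmse(y_true, y_pred):
    """Root-mean-square error between reference and smoothed prediction."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(d ** 2)))
```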

Relevance:

30.00%

Abstract:

Salamanca has been considered among the most polluted cities in Mexico. The vehicle fleet, industry and emissions produced by agriculture, as well as the city's orography and climatic characteristics, have contributed to increased concentrations of particulate matter less than 10 μm in diameter (PM10). In this work, a multilayer perceptron neural network is used to predict the pollutant concentration one hour ahead. The database used to train the neural network consists of historical time series of meteorological variables (wind speed, wind direction, temperature and relative humidity) and PM10 concentrations. Before the prediction, the fuzzy c-means clustering algorithm is applied in order to find relationships between the pollutant and the meteorological variables; these relationships provide additional information that is used for the prediction. Our experiments with the proposed system show the importance of this set of meteorological variables for predicting PM10 concentrations, as well as the efficiency of the neural network. Performance is estimated using the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The results show that the information obtained in the clustering step enables a one-hour-ahead prediction using data from the past two hours.
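A minimal sketch of the one-hour-ahead setup, assuming lagged meteorological and PM10 readings as inputs to scikit-learn's MLPRegressor; the array shapes, lag depth and network size are illustrative assumptions, and the data below is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_lagged(series_matrix, lags=2):
    """Build samples from the past `lags` hours of readings to predict
    the pollutant one hour ahead. series_matrix: (T, n_vars) array whose
    last column is the pollutant concentration (PM10 here)."""
    X, y = [], []
    for t in range(lags, series_matrix.shape[0] - 1):
        X.append(series_matrix[t - lags:t + 1].ravel())
        y.append(series_matrix[t + 1, -1])
    return np.array(X), np.array(y)

# hypothetical columns: wind speed, wind direction, temperature, RH, PM10
data = np.random.rand(500, 5)
X, y = make_lagged(data, lags=2)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000))
model.fit(X, y)
```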

Relevance:

30.00%

Abstract:

An aerodynamic optimization of the train's sensitivity to front wind action is carried out in this paper. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, so that the design space includes as wide a range of geometries as possible among the candidate optimal solutions. The main disadvantage of using a GA is the large number of evaluations needed before finding the optimum. Here, the use of metamodels to replace the Navier-Stokes solver is proposed. Among the possibilities, response surface models and artificial neural networks (ANN) are considered. The best prediction and generalization results are obtained with an ANN, which is then applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
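A sketch of the GA-plus-surrogate idea under stated assumptions: a small ANN metamodel stands in for the CFD (Navier-Stokes) evaluations, and the six "Bézier control parameters" and the synthetic cost function are hypothetical placeholders, not the paper's parametrization.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Surrogate training set: (nose-shape parameters -> aerodynamic cost) pairs
# that would normally come from a CFD solver; synthetic here.
P = rng.uniform(0, 1, (200, 6))          # 6 hypothetical Bézier parameters
cost = ((P - 0.3) ** 2).sum(axis=1)      # stand-in for the CFD output
surrogate = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000).fit(P, cost)

# Simple GA driven by the cheap surrogate instead of the solver.
pop = rng.uniform(0, 1, (40, 6))
for gen in range(50):
    fitness = surrogate.predict(pop)                     # cheap evaluation
    parents = pop[np.argsort(fitness)[:20]]             # truncation selection
    i, j = rng.integers(0, 20, 40), rng.integers(0, 20, 40)
    children = 0.5 * (parents[i] + parents[j])           # blend crossover
    pop = np.clip(children + 0.05 * rng.normal(size=(40, 6)), 0, 1)  # mutation
best = pop[np.argmin(surrogate.predict(pop))]
```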

Relevance:

30.00%

Abstract:

Protein interaction networks have become a tool to study biological processes, whether for predicting molecular functions or for designing new drugs that regulate key biological interactions. Furthermore, such networks are known to be organized in sub-networks of proteins contributing to the same cellular function. However, protein function prediction is not accurate, and the network formalism has traditionally assigned each protein to only one function. By considering the network of physical interactions between yeast proteins together with a manual, single functional classification scheme, we introduce a method able to reveal important information on protein function at both micro- and macro-scale. In particular, inspecting the properties of oscillatory dynamics on top of the protein interaction network leads to the identification of misclassification problems in protein function assignments, as well as to the correct identification of protein functions. We also demonstrate that our approach can give a network representation of the meta-organization of biological processes by unraveling the interactions between different functional classes.
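As a rough sketch of placing oscillatory dynamics on an interaction network, the code below runs Kuramoto oscillators on a toy graph (NetworkX's karate-club graph stands in for a yeast PPI network). The paper's actual dynamical model is not reproduced here; this only illustrates the general idea that phases lock within densely connected modules.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()            # stand-in for a yeast PPI network
A = nx.to_numpy_array(G)
n = A.shape[0]
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, n)  # oscillator phase per protein/node
omega = rng.normal(0, 0.1, n)         # natural frequencies
K, dt = 1.0, 0.05
for _ in range(2000):
    # dtheta_i/dt = omega_i + (K/n) * sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K / n * coupling)
# nodes whose phases lock together suggest shared functional modules
```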

Relevance:

30.00%

Abstract:

Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the problem of predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets and the Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back-propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.
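A crude stand-in for the Markov blanket discovery step: HITON itself performs conditional-independence tests to admit and prune blanket members, which the sketch below replaces with a simple mutual-information ranking. The `top_k` cutoff and the ranking criterion are assumptions for illustration only.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def crude_markov_blanket(X, y, top_k=5):
    """Rank features by mutual information with one class variable and
    keep the strongest ones -- a rough sketch of blanket selection, not
    the HITON algorithm (which also tests conditional independence)."""
    mi = mutual_info_classif(X, y)
    return np.argsort(mi)[::-1][:top_k]

# For a multi-dimensional classifier, one blanket is found per class
# variable, e.g. for a class matrix Y of shape (n_samples, n_classes):
# blankets = {c: crude_markov_blanket(X, Y[:, c]) for c in range(Y.shape[1])}
```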

Relevance:

30.00%

Abstract:

Over the last ten years, Salamanca has been considered among the most polluted cities in Mexico. This paper presents an application of self-organizing map (SOM) neural networks to classify pollution data and automate the determination of air pollution levels for sulphur dioxide (SO2) in Salamanca. Meteorological parameters are well known to be important factors in air quality estimation and prediction. In order to observe their behavior and clarify the influence of wind parameters on SO2 concentrations, a SOM neural network was trained on a year of data. The main advantage of the SOM is that it integrates data from different sensors and provides readily interpretable results. In particular, it is a powerful mapping and classification tool that organizes the information in an accessible way and facilitates establishing an order of priority among the distinguished groups of concentrations, depending on their need for further research or remediation actions in subsequent management steps. The results show a significant correlation between pollutant concentrations and some environmental variables.
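A minimal sketch of the SOM classification step, assuming the third-party `minisom` package and synthetic wind/SO2 data; the grid size, feature columns and training length are illustrative assumptions.

```python
import numpy as np
from minisom import MiniSom   # third-party package "minisom"

# hypothetical columns: wind speed, wind direction, SO2 concentration
data = np.random.rand(1000, 3)
som = MiniSom(8, 8, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, num_iteration=5000)

# Each sample maps to its best-matching unit; map cells group similar
# wind/SO2 conditions, yielding the concentration classes to prioritize.
cells = [som.winner(x) for x in data]
```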

Relevance:

30.00%

Abstract:

Salamanca, situated in the center of Mexico, is among the cities that suffer most from air pollution in the country. The vehicle fleet and industry, as well as the city's orography and climatic characteristics, have contributed to increased concentrations of sulphur dioxide (SO2). In this work, a multilayer perceptron neural network is used to predict the pollutant concentration one hour ahead. The database used to train the neural network consists of historical time series of meteorological variables and SO2 concentrations. Before the prediction, the fuzzy c-means and k-means clustering algorithms are applied in order to find relationships between the pollutant and the meteorological variables. Our experiments with the proposed system show the importance of this set of meteorological variables for predicting SO2 concentrations, as well as the efficiency of the neural network. Performance is estimated using the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The results show that the information obtained in the clustering step enables a one-hour-ahead prediction using data from the past two hours.
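The clustering-before-prediction step could look like the following sketch, where scikit-learn's KMeans stands in for the paper's fuzzy c-means/k-means pair and each hour's cluster label is appended as an extra input for the perceptron; the data and cluster count are synthetic assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster joint meteorological/SO2 states first; regime information is
# then made available to the one-hour-ahead predictor as a feature.
data = np.random.rand(500, 4)                  # met. variables + SO2
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(data)
augmented = np.column_stack([data, labels])    # inputs for the MLP
```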

Relevance:

30.00%

Abstract:

In recent years, significant efforts have been devoted to the development of advanced data analysis tools, both to predict the occurrence of disruptions and to investigate the operational spaces of devices, with the long-term goal of advancing the understanding of the physics of these events and preparing for ITER. On JET, the latest generation of the disruption predictor APODIS has been deployed in the real-time network during the last campaigns with the new metallic wall. Even though it was trained only on discharges with the carbon wall, it has reached very good performance, with both missed alarms and false alarms of the order of a few percent (and strategies to improve the performance have already been identified). Since predicting the type of disruption is also considered very important for optimising the mitigation measures, a new clustering method, based on the geodesic distance on a probabilistic manifold, has been developed. This technique allows automatic classification of an incoming disruption with a success rate better than 85%. Various other manifold learning tools, particularly Principal Component Analysis and Self-Organised Maps, are also producing very interesting results in the comparative analysis of the JET and ASDEX Upgrade (AUG) operational spaces, on the route to developing predictors capable of extrapolating from one device to another.
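A toy sketch of the operational-space comparison idea, projecting two synthetic device-like data sets into a common PCA plane; the signals, dimensions and distributions are invented for illustration, not JET or AUG data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-ins for two devices' operational-space samples.
jet_like = np.random.normal(0.0, 1.0, (300, 10))
aug_like = np.random.normal(0.5, 1.2, (300, 10))

# Fit one projection on the pooled data; overlap of the two clouds hints
# at how well a predictor trained on one device might extrapolate.
pca = PCA(n_components=2).fit(np.vstack([jet_like, aug_like]))
jet_2d, aug_2d = pca.transform(jet_like), pca.transform(aug_like)
```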

Relevance:

30.00%

Abstract:

Disruptions in tokamak devices are unavoidable, and they can have a significant impact on machine integrity, so it is very important to have mechanisms to predict this phenomenon. Disruption prediction is a very complex task, not only because it is a multi-dimensional problem, but also because, to be effective, the actual disruptive event has to be detected well in advance so that successful mitigation strategies can be applied. With these constraints in mind, a real-time disruption predictor has been developed for use in the JET tokamak. The predictor has been designed to run in the Multithreaded Application Real-Time executor (MARTe) framework. The predictor, "Advanced Predictor Of DISruptions" (APODIS), is based on Support Vector Machines (SVM).
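A minimal sketch of an SVM-based disruption classifier using scikit-learn; the feature windows, labels and hyperparameters below are synthetic placeholders, and the real APODIS feature extraction and MARTe integration are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Feature vectors would be windows of plasma signals (e.g. locked mode
# amplitude, plasma current, density); synthetic placeholders here.
X = np.random.rand(400, 12)
y = np.random.randint(0, 2, 400)   # 1 = disruptive window, 0 = safe

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
alarm = clf.predict(X[:5])         # in MARTe this step would run in real time
```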

Relevance:

30.00%

Abstract:

Objective: The main purpose of this research is the novel use of artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining tool for predicting the outcome of patients with acquired brain injury (ABI) after cognitive rehabilitation. The final goal is to increase knowledge in the field of rehabilitation theory based on cognitive affectation. Methods and materials: The data set used in this study contains records belonging to 123 ABI patients with moderate to severe cognitive affectation (according to the Glasgow Coma Scale) who underwent rehabilitation at the Institut Guttmann Neurorehabilitation Hospital (IG) using the tele-rehabilitation platform PREVIRNEC©. The variables included in the analysis comprise the initial neuropsychological evaluation of the patient (cognitive affectation profile), the results of the rehabilitation tasks performed by the patient in PREVIRNEC© and the outcome of the patient after 3–5 months of treatment. To achieve the treatment outcome prediction, we apply and compare three different data mining techniques: the AMMLP model, a backpropagation neural network (BPNN) and a C4.5 decision tree. Results: The prediction performance of the models was measured by ten-fold cross-validation, and several architectures were tested. The results obtained by the AMMLP model are clearly superior, with an average predictive performance of 91.56%; the BPNN and C4.5 models have average prediction accuracies of 80.18% and 89.91%, respectively. The best single AMMLP model provided a specificity of 92.38%, a sensitivity of 91.76% and a prediction accuracy of 92.07%. Conclusions: The proposed prediction model increases knowledge about the contributing factors in an ABI patient's recovery and helps to estimate treatment efficacy in individual patients. The ability to predict treatment outcomes may provide new insights toward improving effectiveness and creating personalized therapeutic interventions based on clinical evidence.
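A sketch of the ten-fold cross-validation comparison, assuming scikit-learn models as stand-ins (a standard MLP for the BPNN, CART for the C4.5 decision tree); the AMMLP metaplasticity rule itself is not implemented here, and the features and labels are synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(123, 20)          # hypothetical patient feature vectors
y = np.random.randint(0, 2, 123)     # hypothetical rehabilitation outcome

models = [("MLP (BPNN stand-in)", MLPClassifier(max_iter=2000)),
          ("CART tree (C4.5 stand-in)", DecisionTreeClassifier())]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=10)   # ten-fold CV accuracy
    print(name, scores.mean())
```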

Relevance:

30.00%

Abstract:

We apply diffusion strategies to propose a cooperative reinforcement learning algorithm in which agents in a network communicate with their neighbors to improve predictions about their environment. The algorithm is suitable for off-policy learning even in large state spaces. We provide a mean-square-error performance analysis under constant step sizes. The gain of cooperation, in the form of more stability and less bias and variance in the prediction error, is illustrated in the context of a classical model. We show that the improvement in performance is especially significant when the behavior policy of the agents differs from the target policy under evaluation.
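A combine-then-adapt sketch of diffusion learning applied to TD(0) value prediction with linear function approximation, under stated assumptions: uniform combination weights, a fully connected network of four agents, and synthetic features and rewards. It illustrates the diffusion structure (average with neighbors, then take a local learning step), not the paper's exact algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, gamma, alpha = 4, 5, 0.9, 0.05
C = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic weights
w = np.zeros((n_agents, dim))                      # linear value parameters

for step in range(1000):
    w = C @ w                                      # combine: average neighbors
    for k in range(n_agents):                      # adapt: local TD(0) step
        phi = rng.normal(size=dim)                 # current state features
        phi_next = rng.normal(size=dim)            # next state features
        r = phi.sum()                              # synthetic reward signal
        delta = r + gamma * phi_next @ w[k] - phi @ w[k]
        w[k] += alpha * delta * phi
```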

Relevance:

30.00%

Abstract:

The impact of Parkinson's disease and its treatment on patients' health-related quality of life can be estimated either by means of generic measures such as the European Quality of Life-5 Dimensions (EQ-5D) or by specific measures such as the 8-item Parkinson's Disease Questionnaire (PDQ-8). In clinical studies, PDQ-8 may be used instead of EQ-5D owing to lack of resources, time or clinical interest in generic measures. Nevertheless, PDQ-8 cannot be applied in cost-effectiveness analyses, which require generic measures and quantitative utility scores such as EQ-5D. To deal with this problem, a commonly used solution is the prediction of EQ-5D from PDQ-8. In this paper, we propose a new probabilistic method to predict EQ-5D from PDQ-8 using multi-dimensional Bayesian network classifiers. Our approach is evaluated using five-fold cross-validation experiments carried out on a Parkinson's data set containing 488 patients, and is compared with two additional Bayesian network-based approaches, two commonly used mapping methods, namely ordinary least squares and censored least absolute deviations, and a deterministic model. Experimental results are promising in terms of predictive performance as well as the identification of dependence relationships among EQ-5D and PDQ-8 items that the mapping approaches are unable to detect.
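For contrast, the ordinary-least-squares mapping baseline mentioned above could be sketched as follows; the PDQ-8 item scores and EQ-5D utilities here are synthetic, and the paper's MBC approach, which predicts the EQ-5D items jointly, is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# PDQ-8: eight item scores; EQ-5D: summarized here as a single utility
# index. Both arrays are synthetic placeholders for illustration.
pdq8 = np.random.randint(0, 5, (488, 8))
utility = np.random.uniform(-0.5, 1.0, 488)

ols = LinearRegression().fit(pdq8, utility)   # direct item-to-utility mapping
predicted = ols.predict(pdq8)
```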

Relevance:

30.00%

Abstract:

An innovative background modeling technique able to accurately segment foreground regions in RGB-D imagery (RGB plus depth) is presented in this paper. The technique is based on a Bayesian framework that efficiently fuses different sources of information to segment the foreground. In particular, the final segmentation is obtained by considering a prediction of the foreground regions, carried out by a novel Bayesian network with a depth-based dynamic model, together with two independent depth- and color-based mixture-of-Gaussians background models. The efficient Bayesian combination of all these data reduces the noise and uncertainties introduced by the color and depth features and the corresponding models. As a result, more compact segmentations and refined foreground object silhouettes are obtained. Experimental results with different databases suggest that the proposed technique outperforms existing state-of-the-art algorithms.
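A rough sketch of the two-model idea using OpenCV's MOG2 background subtractor once per modality; the simple AND-fusion of the two masks stands in for the paper's Bayesian combination and foreground-prediction network, and the depth frame is assumed to be already encoded as an 8-bit image.

```python
import cv2

# One mixture-of-Gaussians background model per modality.
bg_color = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
bg_depth = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def segment(color_frame, depth_frame):
    """Return a fused foreground mask. depth_frame is assumed to be an
    8-bit single-channel image (real depth maps need conversion first)."""
    m_color = bg_color.apply(color_frame)
    m_depth = bg_depth.apply(depth_frame)
    return cv2.bitwise_and(m_color, m_depth)  # foreground in both modalities
```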