974 results for Monte Carlo test
Abstract:
Introduction: Schizophrenia patients frequently suffer from complex motor abnormalities, including fine and gross motor disturbances, abnormal involuntary movements, neurological soft signs and parkinsonism. These symptoms occur early in the course of the disease, persist in chronic patients and may deteriorate with antipsychotic medication. Furthermore, gesture performance is impaired in patients, including the pantomime of tool use. Whether schizophrenia patients also show difficulties with actual tool use has not yet been investigated. Human tool use is complex and relies on a network of distinct and distant brain areas. We therefore aimed to test whether schizophrenia patients have difficulties in tool use and to assess associations with structural brain imaging using voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS). Methods: In total, 44 patients with schizophrenia (DSM-5 criteria; 59% men, mean age 38) underwent structural MR imaging and performed the Tool-Use test. The test examines the use of a scoop and a hammer in three conditions: pantomime (without the tool), demonstration (with the tool) and actual use (with a recipient object). T1-weighted images were processed using SPM8 and DTI data using FSL TBSS routines. To assess structural alterations underlying impaired tool use, we first compared gray matter (GM) volume in VBM and white matter (WM) integrity in TBSS data of patients with and without difficulties in actual tool use. Next, we explored correlations of Tool-Use scores with VBM and TBSS data. Group comparisons were family-wise error corrected for multiple tests. Correlations were uncorrected (p < 0.001) with a minimum cluster threshold of 17 voxels (equivalent to a map-wise false-positive rate of alpha < 0.0001 using a Monte Carlo procedure). Results: Tool use was impaired in schizophrenia (43.2% pantomime, 11.6% demonstration, 11.6% actual use). Impairment was related to reduced GM volume and WM integrity. Whole-brain analyses detected an effect in the SMA in the group analysis. Correlations of tool-use scores and brain structure revealed alterations in brain areas of the dorso-dorsal pathway (superior occipital gyrus, superior parietal lobule, and dorsal premotor area) and the ventro-dorsal pathway (middle occipital gyrus, inferior parietal lobule) of the action network, as well as the insula and the left hippocampus. Furthermore, significant correlations within connecting fiber tracts, particularly alterations within the bilateral superior and anterior corona radiata as well as the corpus callosum, were associated with tool-use performance. Conclusions: Tool-use performance was impaired in schizophrenia and was associated with reduced GM volume in the action network. Our results are in line with reports of impaired tool use in patients with brain lesions, particularly of the dorso-dorsal and ventro-dorsal streams of the action network. In addition, an effect of tool use on WM integrity was shown within fiber tracts connecting regions important for planning and executing tool use. Furthermore, the hippocampus is part of a brain system responsible for spatial memory and navigation. The results suggest that structural brain alterations in the common praxis network contribute to impaired tool use in schizophrenia.
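By way of illustration only (this is not the authors' imaging pipeline), a Monte Carlo determination of a cluster-extent threshold of the kind referred to above can be sketched as follows: generate smoothed noise-only volumes, apply the voxel-wise threshold, record the largest suprathreshold cluster in each volume, and choose the extent that noise clusters exceed with probability below the desired map-wise alpha. Volume size, smoothness and iteration count are assumptions.

```python
# Illustrative Monte Carlo cluster-extent threshold (AlphaSim-style idea);
# the volume shape and smoothness are assumed, not taken from the study.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
shape = (40, 48, 40)                 # assumed volume (voxels)
sigma = 2.0 / 2.3548                 # assumed smoothness, FWHM = 2 voxels
z_thresh = stats.norm.isf(0.001)     # voxel-wise p < 0.001, one-sided

max_cluster = []
for _ in range(1000):                # many more iterations are needed for alpha = 1e-4
    noise = rng.standard_normal(shape)
    smooth = ndimage.gaussian_filter(noise, sigma)
    smooth /= smooth.std()           # re-standardise after smoothing
    labels, n = ndimage.label(smooth > z_thresh)
    sizes = np.bincount(labels.ravel())[1:] if n else np.array([0])
    max_cluster.append(sizes.max())

alpha = 0.0001
k = int(np.quantile(max_cluster, 1 - alpha)) + 1
print("required cluster extent (voxels):", k)
```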
Abstract:
I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample chi-squared approximation tests based on Pearson's X2, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440–464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221–230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov–Smirnov test for discrete data.
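As a rough illustration of the Monte Carlo exact-test idea (not the mgof implementation itself), the observed Pearson X2 can be compared with its distribution under repeated multinomial sampling from the hypothesized proportions; the counts and proportions below are invented.

```python
# Sketch of a Monte Carlo goodness-of-fit test for a discrete variable,
# using Pearson's X2; an illustration only, not the Stata mgof code.
import numpy as np

def mc_gof_pearson(observed, p0, n_rep=10000, seed=1):
    observed = np.asarray(observed, dtype=float)
    p0 = np.asarray(p0, dtype=float)
    n = observed.sum()
    expected = n * p0
    x2_obs = ((observed - expected) ** 2 / expected).sum()

    rng = np.random.default_rng(seed)
    sims = rng.multinomial(int(n), p0, size=n_rep)
    x2_sim = ((sims - expected) ** 2 / expected).sum(axis=1)
    # Monte Carlo p-value with the usual +1 correction
    return (np.sum(x2_sim >= x2_obs) + 1) / (n_rep + 1)

print(mc_gof_pearson([18, 55, 27], [0.25, 0.5, 0.25]))
```

The +1 in numerator and denominator keeps the Monte Carlo p-value strictly positive and valid even when no simulated statistic reaches the observed one.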
Abstract:
Antihydrogen holds the promise to test, for the first time, the universality of free fall with a system composed entirely of antiparticles. The AEgIS experiment at CERN's antiproton decelerator aims to measure the gravitational interaction between matter and antimatter by measuring the deflection of a beam of antihydrogen in the Earth's gravitational field (g). The principle of the experiment is as follows: cold antihydrogen atoms are synthesized in a Penning-Malmberg trap, are Stark accelerated towards a moiré deflectometer, the classical counterpart of an atom interferometer, and annihilate on a position-sensitive detector. Crucial to the success of the experiment is the spatial precision of the position-sensitive detector. We propose a novel free-fall detector based on a hybrid of two technologies: emulsion detectors, which have an intrinsic spatial resolution of 50 nm but no temporal information, and a silicon strip / scintillating fiber tracker to provide timing and positional information. In 2012 we tested emulsion films in vacuum with antiprotons from CERN's antiproton decelerator. The annihilation vertices could be observed directly on the emulsion surface using the microscope facility available at the University of Bern. The annihilation vertices were successfully reconstructed with a resolution of 1–2 μm on the impact parameter. If such a precision can be realized in the final detector, Monte Carlo simulations suggest that of order 500 antihydrogen annihilations will be sufficient to determine g with 1% accuracy. This paper presents current research towards the development of this technology for use in the AEgIS apparatus and prospects for the realization of the final detector.
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
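A minimal sketch of the sampling machinery, assuming a deliberately simplified model: pooled logit-sensitivity and logit-specificity with vague normal priors, fitted by random-walk Metropolis rather than the BUGS/Gibbs implementation used in the paper, and with invented study counts. It omits the between-study variances and correlation that the paper's bivariate models include.

```python
# Crude random-walk Metropolis for a simplified pooled sensitivity/specificity
# model with vague N(0, 10^2) priors on the logits; illustration only.
import numpy as np

# invented example data: true positives / diseased, true negatives / non-diseased
tp = np.array([20, 45, 12]); n_dis = np.array([25, 50, 15])
tn = np.array([70, 80, 40]); n_non = np.array([80, 90, 50])

def log_post(theta):
    mu_se, mu_sp = theta
    se = 1.0 / (1.0 + np.exp(-mu_se))
    sp = 1.0 / (1.0 + np.exp(-mu_sp))
    loglik = np.sum(tp * np.log(se) + (n_dis - tp) * np.log(1 - se)) \
           + np.sum(tn * np.log(sp) + (n_non - tn) * np.log(1 - sp))
    logprior = -0.5 * (mu_se**2 + mu_sp**2) / 100.0   # vague N(0, 10^2)
    return loglik + logprior

rng = np.random.default_rng(0)
theta = np.zeros(2); draws = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.1, size=2)      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta.copy())

post = np.array(draws[5000:])                         # discard burn-in
print("posterior mean sensitivity:", (1 / (1 + np.exp(-post[:, 0]))).mean())
print("posterior mean specificity:", (1 / (1 + np.exp(-post[:, 1]))).mean())
```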
Abstract:
A Monte Carlo simulation has been conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), the Randomized Pólya Urn (RPU), the Birth and Death Urn with Immigration (BDUI), and the Drop-the-Loser Urn (DL). Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log-likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. When compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics in SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios. However, the ORR allocation has better power and a lower type I error rate when the log of relative risk is the test statistic. The number of expected failures in ORR is smaller than in RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics. The power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log-likelihood ratio test statistic is robust against the change of adaptive randomization procedure.
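A minimal Monte Carlo sketch of the Randomized Play-the-Winner rule, assuming illustrative success probabilities and trial size: a success adds a ball for the treatment just used, a failure adds a ball for the other arm, and the share of patients allocated to the better arm is tracked over repeated simulated trials.

```python
# Monte Carlo sketch of the Randomized Play-the-Winner (RPW) urn;
# success probabilities and trial size are illustrative.
import numpy as np

def rpw_trial(p=(0.7, 0.5), n=100, rng=None):
    """One trial under the RPW rule; returns patients assigned per arm."""
    rng = rng or np.random.default_rng()
    urn = [1, 1]                        # initial balls for arm 0 and arm 1
    n_arm = [0, 0]
    for _ in range(n):
        arm = 0 if rng.uniform() < urn[0] / sum(urn) else 1
        success = rng.uniform() < p[arm]
        n_arm[arm] += 1
        urn[arm if success else 1 - arm] += 1   # reward same arm on success
    return n_arm

rng = np.random.default_rng(0)
share = [rpw_trial(rng=rng)[0] / 100 for _ in range(2000)]
print("mean share of patients on the better arm: %.3f (sd %.3f)"
      % (np.mean(share), np.std(share)))
```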
Abstract:
A stratigraphy-based chronology for the North Greenland Eemian Ice Drilling (NEEM) ice core has been derived by transferring the annual-layer-counted Greenland Ice Core Chronology 2005 (GICC05) and its model extension (GICC05modelext) from the NGRIP core to the NEEM core, using 787 match points, mainly of volcanic origin, identified in the electrical conductivity measurement (ECM) and dielectric profiling (DEP) records. Tephra horizons found in both the NEEM and NGRIP ice cores are used to test the matching based on ECM and DEP and provide five additional horizons used for the timescale transfer. A thinning function reflecting the accumulated strain along the core has been determined using a Dansgaard-Johnsen flow model and an isotope-dependent accumulation rate parameterization. Flow parameters are determined from a Monte Carlo analysis constrained by the observed depth-age horizons. In order to construct a chronology for the gas phase, the ice age-gas age difference (Δage) has been reconstructed using a coupled firn densification-heat diffusion model. Temperature and accumulation inputs to the Δage model, initially derived from the water isotope proxies, have been adjusted to optimize the fit to timing constraints from δ15N of nitrogen and high-resolution methane data during the abrupt onset of Greenland interstadials. The ice and gas chronologies and the corresponding thinning function represent the first chronology for the NEEM core, named GICC05modelext-NEEM-1. Based on both the flow and firn modelling results, the accumulation history for the NEEM site has been reconstructed. Together, the timescale and accumulation reconstruction provide the necessary basis for further analysis of the records from NEEM.
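As a toy illustration of the Monte Carlo step (not the NEEM analysis itself), one can draw accumulation rate and kink height for a simplified Dansgaard-Johnsen velocity profile, compute the modelled age at a few depths, and keep only the parameter sets consistent with assumed dated horizons; all numbers below are placeholders, not NEEM values.

```python
# Toy Monte Carlo fit of Dansgaard-Johnsen flow parameters to dated horizons.
# H = ice thickness, h = kink height, acc = accumulation (ice equivalent);
# vertical velocity is linear above h and quadratic below (frozen bed,
# no basal melting). All values are placeholders.
import numpy as np

H = 2500.0                              # assumed ice thickness (m)
z = np.linspace(1.0, H, 5000)           # height above bed (m)

def age_profile(acc, h):
    """Age versus height for a simple Dansgaard-Johnsen profile."""
    w = np.where(z >= h,
                 acc * (z - h / 2.0) / (H - h / 2.0),
                 acc * z**2 / (2.0 * h) / (H - h / 2.0))   # downward velocity (m/yr)
    dz = np.gradient(z)
    return np.cumsum((dz / w)[::-1])[::-1]   # integrate 1/w from the surface down

# placeholder "dated horizons": (height above bed in m, age in years)
horizons = [(2000.0, 2900.0), (1200.0, 11800.0)]

rng = np.random.default_rng(0)
accepted = []
for _ in range(20000):
    acc = rng.uniform(0.10, 0.30)       # accumulation, m ice equivalent / yr
    h = rng.uniform(800.0, 2200.0)      # kink height (m)
    age = age_profile(acc, h)
    if all(abs(np.interp(hz, z, age) - t) / t < 0.05 for hz, t in horizons):
        accepted.append((acc, h))

accepted = np.array(accepted)
print("accepted parameter sets:", len(accepted))
if len(accepted):
    print("accumulation %.3f-%.3f m/yr" % (accepted[:, 0].min(), accepted[:, 0].max()))
    print("kink height  %.0f-%.0f m" % (accepted[:, 1].min(), accepted[:, 1].max()))
```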
Abstract:
With a combination of Direct Simulation Monte Carlo (DSMC) calculations and test-particle computations, the ballistic transport of the hydroxyl radicals and oxygen atoms produced by photodissociation of water molecules in the coma of comet 67P/Churyumov-Gerasimenko is modelled. We discuss the key elements and essential features of such simulations, whose results can be compared with the remote-sensing and in situ measurements of the cometary gas coma from the Rosetta mission at different orbital phases of the comet.
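A minimal test-particle sketch of the kind of calculation involved, under strongly simplifying assumptions (constant radial parent outflow, straight-line daughter trajectories, and order-of-magnitude lifetimes and speeds that are not the paper's DSMC inputs):

```python
# Test-particle sketch: OH produced by photodissociation of radially outflowing
# H2O, given an isotropic excess velocity, then followed ballistically.
import numpy as np

rng = np.random.default_rng(0)
v_parent = 0.8e3        # m/s, assumed H2O outflow speed
tau_h2o = 8.0e4         # s, assumed H2O photodissociation lifetime
tau_oh = 1.6e5          # s, assumed OH lifetime
v_excess = 1.05e3       # m/s, assumed OH excess speed from photodissociation

n = 200000
# distance from the nucleus at which each parent molecule dissociates
r_birth = v_parent * rng.exponential(tau_h2o, n)
# random radial direction of the parent
u = rng.normal(size=(n, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
# isotropic excess velocity added to the daughter OH
w = rng.normal(size=(n, 3)); w /= np.linalg.norm(w, axis=1, keepdims=True)
v_oh = v_parent * u + v_excess * w
# age of surviving OH at a steady-state snapshot is exponential with mean tau_oh
t = rng.exponential(tau_oh, n)
pos = r_birth[:, None] * u + v_oh * t[:, None]
r = np.linalg.norm(pos, axis=1)
print("median OH distance from the nucleus: %.0f km" % (np.median(r) / 1e3))
```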
Abstract:
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
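A toy sketch of the MCMC-with-MRF-interaction idea, assuming a one-dimensional state per vehicle, a stand-in Gaussian likelihood around hypothetical measurements, and a simple overlap penalty instead of the paper's appearance and motion-parallax model:

```python
# Minimal MCMC tracking step in the spirit of MCMC-based multi-object tracking:
# the joint state holds the 1-D positions of three vehicles, and an MRF-style
# pairwise potential penalises overlapping hypotheses. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
meas = np.array([10.0, 12.5, 30.0])      # hypothetical measurements for 3 vehicles
sigma_obs, min_gap = 1.0, 2.0

def log_target(x):
    loglik = -0.5 * np.sum((x - meas) ** 2) / sigma_obs**2
    pen = 0.0                            # MRF interaction: penalise close pairs
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            pen += max(0.0, min_gap - abs(x[i] - x[j])) ** 2
    return loglik - 5.0 * pen

x = meas + rng.normal(scale=2.0, size=3)  # initial hypothesis
samples = []
for _ in range(5000):
    k = rng.integers(3)                   # move one vehicle at a time
    prop = x.copy(); prop[k] += rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x.copy())
print("posterior mean positions:", np.mean(samples[1000:], axis=0))
```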
Abstract:
All meta-analyses should include a heterogeneity analysis. Even so, it is not easy to decide whether a set of studies is homogeneous or heterogeneous because of the low statistical power of the statistics used (usually the Q test). Objective: Determine a set of rules enabling SE researchers to find out, based on the characteristics of the experiments to be aggregated, whether or not it is feasible to accurately detect heterogeneity. Method: Evaluate the statistical power of heterogeneity detection methods using a Monte Carlo simulation process. Results: The Q test is not powerful when the meta-analysis contains up to a total of about 200 experimental subjects and the effect size difference is less than 1. Conclusions: The Q test cannot be used as a decision-making criterion for meta-analysis in small sample settings like SE. Random effects models should be used instead of fixed effects models. Caution should be exercised when applying Q test-mediated decomposition into subgroups.
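A sketch of the kind of Monte Carlo power study described, assuming five studies of 20 subjects per arm (about 200 subjects in total), standardized mean differences as effect sizes, and Cochran's Q compared against the chi-squared critical value; the scenario parameters are illustrative rather than the paper's exact design.

```python
# Monte Carlo power of Cochran's Q heterogeneity test for a small meta-analysis.
import numpy as np
from scipy import stats

def q_test_power(k=5, n_per_arm=20, deltas=(0.0, 1.0), n_sim=5000, alpha=0.05, seed=0):
    """Alternate studies between the two true effect sizes in `deltas`."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        d_obs, w = [], []
        for i in range(k):
            delta = deltas[i % 2]
            a = rng.normal(delta, 1.0, n_per_arm)
            b = rng.normal(0.0, 1.0, n_per_arm)
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            d = (a.mean() - b.mean()) / sp                 # Cohen's d
            var_d = 2 / n_per_arm + d**2 / (4 * n_per_arm) # approx. variance of d
            d_obs.append(d); w.append(1 / var_d)
        d_obs, w = np.array(d_obs), np.array(w)
        d_bar = np.sum(w * d_obs) / np.sum(w)
        q = np.sum(w * (d_obs - d_bar) ** 2)
        rejections += q > stats.chi2.isf(alpha, k - 1)
    return rejections / n_sim

print("power (effect size difference = 1):", q_test_power())
print("type I error (homogeneous studies):", q_test_power(deltas=(0.5, 0.5)))
```

With roughly 200 subjects in total, the estimated power stays well below conventional targets, which is the pattern the abstract reports.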
Abstract:
This project investigates the utility of differential algebra (DA) techniques applied to the problem of orbital dynamics with initial uncertainties in the orbit determination of the involved bodies. The use of DA theory allows splitting a common Monte Carlo simulation into two parts: the generation of a Taylor map of the final states with respect to perturbations in the initial coordinates, and the evaluation of the map for many points. A propagator is implemented exploiting DA techniques and tested in the field of asteroid impact risk monitoring, with the potentially hazardous asteroids 2011 AG5 and 2007 VK184 as test cases. Results show that the new method is able to simulate 2.5 million trajectories with a precision good enough for the impact probability to be accurately reproduced, while running much faster than a traditional Monte Carlo approach (1 and 2 days, respectively).
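A schematic of the two-step idea, with an invented one-dimensional cubic Taylor map standing in for the output of the DA propagator and an illustrative dispersion and impact threshold: the expensive propagation is done once (not shown), and the Monte Carlo step only evaluates the polynomial.

```python
# Sketch of the DA split: (1) assume a Taylor map of the final state as a
# polynomial in the initial deviation, as would come out of the DA propagation;
# (2) evaluate the cheap polynomial for millions of sampled initial conditions.
# Map coefficients, dispersion and impact threshold are invented.
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 3rd-order map: b-plane distance (km) vs along-track deviation dx (km)
coeffs = np.array([1.2e5, 3.0e3, -4.0e-1, 2.0e-5])   # c0 + c1*dx + c2*dx^2 + c3*dx^3

def bplane_distance(dx):
    return coeffs[0] + coeffs[1] * dx + coeffs[2] * dx**2 + coeffs[3] * dx**3

impact_radius_km = 6.4e3              # illustrative capture threshold
sigma_dx_km = 50.0                    # assumed 1-sigma initial uncertainty

dx = rng.normal(0.0, sigma_dx_km, 2_500_000)   # evaluate the map, not the dynamics
p_impact = np.mean(np.abs(bplane_distance(dx)) < impact_radius_km)
print("Monte Carlo impact probability:", p_impact)
```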
Abstract:
The comparison of the different bids submitted in the tender for a project, under the traditional contract system of open measurement and closed unit prices, requires analysis tools that are able to discriminate between proposals that, while having a similar overall amount, may have a very different economic impact during the execution of the works. One situation not easily detected by traditional methods is the behaviour of the actual cost in response to changes in the quantities actually executed on site with respect to those estimated in the project. This paper proposes to address this situation through a quantitative risk analysis technique such as the Monte Carlo method. This procedure, as is well known, allows the input data defining the problem to vary according to defined probability functions, generates a large number of test cases, and treats the results statistically to obtain the most probable final values, together with the parameters needed to measure the reliability of the estimate. We present a model for the comparison of bids, designed so that it can be applied in real cases, applying to the known data variation conditions that are easy to set up by the professionals who perform these tasks.
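A minimal sketch of the proposed analysis, with invented work items, quantities, unit rates and variation ranges: executed quantities are drawn from triangular distributions around the design estimate, each bid is priced at the simulated quantities, and the resulting cost distributions are compared.

```python
# Monte Carlo comparison of bids under quantity variation; data are invented.
import numpy as np

rng = np.random.default_rng(0)
design_qty = np.array([1000.0, 250.0, 4000.0])       # estimated quantities per item
bids = {                                             # unit rates per bidder
    "bid A": np.array([12.0, 80.0, 3.5]),
    "bid B": np.array([10.5, 95.0, 3.2]),
}

n_sim = 20000
# executed quantity between -20% and +40% of the estimate, mode at the estimate
qty = rng.triangular(0.8 * design_qty, design_qty, 1.4 * design_qty,
                     size=(n_sim, design_qty.size))

for name, rates in bids.items():
    cost = qty @ rates
    print(name,
          "tender total:", float(design_qty @ rates),
          "mean final cost:", round(cost.mean(), 0),
          "P90:", round(np.quantile(cost, 0.9), 0))
```

Two bids with nearly identical tender totals can show clearly different mean final costs and upper percentiles once the quantity variation is simulated, which is the discrimination the model aims to provide.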
Abstract:
The objective of this Master's Thesis is to study the precision of the results obtained when measuring the sound absorption coefficient in an impedance tube, according to standard UNE-EN ISO 10534-2 "Determination of sound absorption coefficient and impedance in impedance tubes. Part 2: Transfer-function method". Firstly, the theoretical foundations of the test method were studied. The method was then described in detail and applied to a practical case with the instrumentation available in the Acoustics laboratory of the College. Regarding the precision of the method, the preparation and installation of the test sample were analysed as possible sources of imprecision. For this purpose, several tests were carried out on two types of acoustic materials in order to study the dispersion of results produced both by the cutting of the sample during its preparation and by its placement in the impedance tube. In addition, it was studied whether deviations in the measurement of the temperature and of the distance between the microphones influence the measured sound absorption coefficient and its associated uncertainty. Since a test result is only complete when it is accompanied by a statement of the uncertainty of that result, this work applies to this test method a procedure for estimating the uncertainty using the Monte Carlo method.
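A compact sketch of the Monte Carlo uncertainty evaluation at a single frequency, using the ISO 10534-2 transfer-function relations; the measured transfer function, the geometry and the input standard uncertainties below are assumed values, not those of the laboratory measurement.

```python
# Monte Carlo (GUM Supplement 1 style) propagation for the absorption
# coefficient from the transfer-function method at one frequency.
import numpy as np

rng = np.random.default_rng(0)
f = 1000.0                           # Hz
s0, x10, t0 = 0.05, 0.10, 20.0       # mic spacing (m), sample-to-far-mic (m), temp (C)
H12_meas = 0.35 - 0.55j              # assumed measured transfer function p2/p1

def alpha(H12, s, x1, temp_c):
    c = 343.2 * np.sqrt((273.15 + temp_c) / 293.15)   # speed of sound
    k0 = 2 * np.pi * f / c
    H_I = np.exp(-1j * k0 * s)                        # incident-wave transfer function
    H_R = np.exp(+1j * k0 * s)                        # reflected-wave transfer function
    r = (H12 - H_I) / (H_R - H12) * np.exp(2j * k0 * x1)
    return 1 - np.abs(r) ** 2

n = 100000
a = alpha(H12_meas + rng.normal(0, 0.01, n) + 1j * rng.normal(0, 0.01, n),
          rng.normal(s0, 0.2e-3, n),        # u(s)  = 0.2 mm
          rng.normal(x10, 0.5e-3, n),       # u(x1) = 0.5 mm
          rng.normal(t0, 0.5, n))           # u(t)  = 0.5 C
print("alpha = %.3f +/- %.3f (k=1)" % (a.mean(), a.std(ddof=1)))
```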
Abstract:
The objective of this Master's Thesis is to study the causes of the precision of the measurement results of the sound absorption coefficient in a reverberation room, following the procedure described in standard UNE-EN ISO 354 "Measurement of sound absorption in a reverberation room". Firstly, the theoretical foundations of the test method were studied. The method was then described in detail and applied to a practical case with the instrumentation available in the Acoustics laboratory of the College. Since a test result is only complete when it is accompanied by a statement of the uncertainty of that result, both the classical GUM uncertainty framework and the Monte Carlo method were applied to this test method in order to evaluate the uncertainty and compare the results obtained with the two methodologies. In addition, to study the influence that introducing an absorbent material has on the sound field of a reverberation room, a ray-tracing study of the CEIS reverberation room was carried out.
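A compact sketch of the Monte Carlo evaluation, using the ISO 354 relation between reverberation time and equivalent absorption area (air attenuation neglected); the room, sample and uncertainty values are assumptions, not those of the laboratory measurement.

```python
# Monte Carlo propagation for the ISO 354 absorption coefficient:
# A = 55.3*V/(c*T) for the empty room (T1) and with the sample (T2),
# alpha_s = (A2 - A1)/S. All inputs and uncertainties are assumed.
import numpy as np

rng = np.random.default_rng(0)
V, S = 200.0, 10.8                 # room volume (m3), sample area (m2)
T1, T2 = 5.2, 2.1                  # reverberation times (s), empty / with sample
temp_c = 20.0

n = 100000
c = 343.2 * np.sqrt((273.15 + rng.normal(temp_c, 0.5, n)) / 293.15)
t1 = rng.normal(T1, 0.10, n)       # u(T1) = 0.10 s (spatial averaging, decay fits)
t2 = rng.normal(T2, 0.05, n)
v = rng.normal(V, 2.0, n)
s_area = rng.normal(S, 0.05, n)

A1 = 55.3 * v / (c * t1)
A2 = 55.3 * v / (c * t2)
alpha_s = (A2 - A1) / s_area
print("alpha_s = %.2f +/- %.2f (k=1)" % (alpha_s.mean(), alpha_s.std(ddof=1)))
```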
Abstract:
Synthetic Aperture Radars (SAR) are systems designed in the early 1950s that are capable of obtaining images of the ground using electromagnetic signals. Their operation is therefore not interrupted by adverse meteorological conditions or at night, as happens with optical systems. The name of the system comes from the creation of a synthetic aperture, larger than the real one, by moving the platform that carries the radar (typically a plane or a satellite). It provides the same resolution as a static radar equipped with a larger antenna. As it moves, the radar keeps emitting pulses every 1/PRF seconds (the PRF is the pulse repetition frequency), whose echoes are stored and processed to obtain the image of the ground. To carry out this process, the algorithm needs to assume that the targets in the illuminated scene are not moving. If that is the case, the algorithm is able to extract a focused image from the signal. However, if the targets are moving, they appear unfocused and/or shifted from their true position in the final image. There are applications in which it is especially useful to have information about moving targets (military, rescue tasks, study of water flows, surveillance of maritime routes...). This capability is called Ground Moving Target Indicator (GMTI). This is why it is worth studying and developing techniques capable of detecting these targets and placing them correctly in the scene. In this document, some of the principal GMTI algorithms used in SAR systems are described in detail. A simulator has been created to test the behaviour of each implemented algorithm in a general scenario with moving targets. Finally, Monte Carlo tests have been performed, allowing us to extract conclusions and statistics for each algorithm.
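As an illustration of what such Monte Carlo tests can look like (not one of the algorithms implemented in the document), a toy two-channel cancellation detector can be evaluated by simulating clutter-plus-noise scenes with and without a mover and estimating detection and false-alarm probabilities; all scene and system parameters below are invented.

```python
# Monte Carlo evaluation of a toy two-channel (DPCA-style) GMTI detector:
# both channels share the clutter, a mover adds an interferometric phase
# proportional to its radial velocity, and the channel difference is thresholded.
import numpy as np

rng = np.random.default_rng(0)
lam, d, v_platform = 0.03, 1.0, 150.0       # wavelength (m), baseline (m), speed (m/s)
v_target = 3.0                              # radial velocity of the mover (m/s)
clutter_sigma, noise_sigma, target_amp = 1.0, 0.2, 1.0

def trial(with_target):
    clutter = clutter_sigma * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    ch1 = clutter + noise_sigma * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    ch2 = clutter + noise_sigma * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    if with_target:
        phase = 2 * np.pi * v_target * d / (lam * v_platform)  # assumed ATI phase model
        ch1 += target_amp
        ch2 += target_amp * np.exp(1j * phase)
    return abs(ch1 - ch2)                   # residual after channel cancellation

n = 50000
h0 = np.array([trial(False) for _ in range(n)])
h1 = np.array([trial(True) for _ in range(n)])
thr = np.quantile(h0, 1 - 1e-3)             # threshold set for Pfa = 1e-3
print("Pfa:", np.mean(h0 > thr), " Pd:", np.mean(h1 > thr))
```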
Abstract:
Currently, reservoir management during floods is commonly carried out using simulation models, mainly because of their ease of use in real time by the dam operator. Optimization models for reservoir management have been developed that improve on the results of simulation models, but their application in real time is very difficult or simply unworkable, because the release decision depends on knowledge of the future flood inflow to the reservoir. For this reason, the objective of this work is to develop a flood-management model for reservoirs that incorporates the advantages of an optimization model while remaining easy to use in real time by the dam manager. For this purpose, a Bayesian network model was built to represent the processes of the contributing catchment and the reservoir; it learns from cases generated synthetically by a lumped hydrological model and an optimization model of reservoir management. In a first stage, a large number of synthetic flood episodes was generated using the Monte Carlo method to obtain the rainfall, together with a lumped rainfall-runoff transformation model to obtain the flood hydrographs. The resulting series were then used as input to the reservoir management model PLEM, which optimizes a cost objective function by mixed-integer linear programming, generating an equal number of optimal events of released flow and of reservoir level evolution. The simulated episodes were used to train and evaluate two Bayesian network models: one that forecasts the inflow to the reservoir and another that predicts the released flow, both over a time horizon of one to five hours, in one-hour intervals. In the case of the hydrological Bayesian network, the chosen inflow is the mean of the forecast probability distribution. In the case of the hydraulic Bayesian network, because of the markedly non-linear behaviour of this process and because the Bayesian network returns a range of possible values of released flow, a methodology was developed to select a single value and thereby facilitate the dam operator's work. This methodology consists of testing several proposed strategies, which include zonings and alternatives for selecting a single released-flow value in each zoning, on a sufficiently large set of synthetic episodes. The results of each strategy were compared with the MEV method, and the strategies that improve on the MEV results, in terms of the maximum released flow and the maximum level reached by the reservoir, were selected; any of them can be used by the dam operator in real time for the reservoir studied (Talave). The proposed methodology could be applied to any isolated reservoir to obtain, for that particular reservoir, strategies that improve on the MEV results. Finally, as an example, the methodology was applied to a synthetic flood, obtaining the released flow and the reservoir level in each time interval, and the MIGEL model was applied to obtain at each instant the gate-opening configuration of the outlet works that will evacuate the flow.
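A minimal sketch of the first stage described above, assuming an illustrative storm distribution, an SCS curve-number loss model and a made-up unit hydrograph in place of the calibrated lumped model for the Talave basin:

```python
# Monte Carlo generation of synthetic inflow hydrographs: sample storms, convert
# rainfall to runoff with a simple SCS-CN loss model, and convolve with a unit
# hydrograph. All parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_storm():
    depth = max(rng.gumbel(40.0, 15.0), 0.0)           # event rainfall depth (mm)
    hours = rng.integers(6, 24)
    w = rng.random(hours); w /= w.sum()                # random hyetograph shape
    return depth * w                                   # hourly rainfall (mm)

def runoff(p_mm, cn=80.0):
    s = 25400.0 / cn - 254.0                           # SCS-CN potential retention (mm)
    cum_p = np.cumsum(p_mm)
    cum_q = np.where(cum_p > 0.2 * s,
                     (cum_p - 0.2 * s) ** 2 / (cum_p + 0.8 * s), 0.0)
    return np.diff(np.concatenate(([0.0], cum_q)))     # incremental runoff (mm)

area_km2 = 120.0
uh = np.array([0.05, 0.20, 0.30, 0.25, 0.12, 0.05, 0.03])   # unit hydrograph (1 mm/h)
uh_m3s = uh * area_km2 * 1e3 / 3600.0                  # mm over the basin -> m3/s

events = []
for _ in range(1000):
    q = np.convolve(runoff(synthetic_storm()), uh_m3s)
    events.append(q)
print("mean peak inflow (m3/s):", np.mean([e.max() for e in events]))
```

Hydrographs generated this way would then be fed to the optimization model (PLEM in the text) to produce the optimal release series from which the Bayesian networks learn.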