27 results for Stochastic Dominance
at Universidad Politécnica de Madrid
Abstract:
Introducing cover crops (CC) interspersed with intensively fertilized crops in rotation has the potential to reduce nitrate leaching. This paper evaluates various strategies involving CC between maize crops and compares the economic and environmental results with those of a typical maize–fallow rotation. The comparison is performed through stochastic (Monte Carlo) simulation models of farm profits using probability distribution functions (pdfs) of yield and N fertilizer saving fitted with data collected from various field trials, and pdfs of crop prices and the cost of fertilizer fitted from statistical sources. Stochastic dominance relationships are obtained to rank the most profitable strategies from a farm financial perspective. A two-criterion comparison scheme is proposed to rank alternative strategies based on farm profit and nitrate leaching levels, taking the maize–fallow rotation as the baseline scenario. The results show that when CC biomass is sold as forage instead of being kept in the soil, greater profit and less nitrate leaching are achieved than in the baseline scenario. While the fertilizer saving will be lower if the CC is sold than if it is kept in the soil, the revenue obtained from the sale of the CC compensates for the reduced fertilizer savings. The results suggest that CC could provide a double dividend of greater profit and reduced nitrate leaching in intensive irrigated cropping systems in Mediterranean regions.
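The ranking step described above can be sketched as follows: a minimal Monte Carlo profit simulation followed by a first-order stochastic dominance check. All distribution parameters and the forage-revenue figures are illustrative assumptions, not the paper's fitted pdfs; common random numbers are used so that both strategies face the same simulated yield and price draws.

```python
import random

random.seed(42)

def simulate_profits(mean_yield, sd_yield, mean_price, sd_price, fixed_cost, n=2000):
    """Monte Carlo profit samples: stochastic yield x stochastic price - costs."""
    return [max(random.gauss(mean_yield, sd_yield), 0.0)
            * random.gauss(mean_price, sd_price) - fixed_cost
            for _ in range(n)]

def fsd_dominates(a, b):
    """First-order stochastic dominance for equal-size samples: every order
    statistic of `a` must be at least the matching order statistic of `b`."""
    return all(x >= y for x, y in zip(sorted(a), sorted(b)))

# Baseline maize-fallow rotation (illustrative parameters).
baseline = simulate_profits(10.0, 1.0, 150.0, 10.0, fixed_cost=600.0)
# Selling CC biomass: same draws (common random numbers) plus an assumed net
# forage revenue (400/ha) minus the forgone fertilizer saving (150/ha).
sell_cc = [p + 400.0 - 150.0 for p in baseline]

print("sell-CC dominates baseline:", fsd_dominates(sell_cc, baseline))
```

Because the net effect of selling the CC is a positive shift of every profit sample, the selling strategy dominates the baseline at first order in this toy setting.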
Abstract:
In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and to adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
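A minimal sketch of one such dominance measuring method under interval weight information (the alternatives, utilities, and intervals are invented for illustration): weights are sampled from the intervals and normalized, and the pairwise dominance value D[i][j] is approximated as the minimum observed difference between the alternatives' additive utilities.

```python
import random

random.seed(0)

# Illustrative utilities of three alternatives (rows) on three criteria (cols).
utilities = [
    [0.9, 0.4, 0.6],   # alternative A
    [0.5, 0.8, 0.5],   # alternative B
    [0.3, 0.6, 0.9],   # alternative C
]
# Incomplete information: an interval for each criterion weight.
intervals = [(0.2, 0.5), (0.1, 0.4), (0.3, 0.6)]

def sample_weights(intervals):
    """Draw a weight vector inside the intervals, normalized to sum to 1."""
    w = [random.uniform(lo, hi) for lo, hi in intervals]
    s = sum(w)
    return [x / s for x in w]

def dominance_matrix(utilities, intervals, n=5000):
    """Monte Carlo approximation of D[i][j] = min over feasible weights of
    the additive utility difference U_i(w) - U_j(w)."""
    m = len(utilities)
    D = [[float("inf")] * m for _ in range(m)]
    for _ in range(n):
        w = sample_weights(intervals)
        scores = [sum(wk * u for wk, u in zip(w, alt)) for alt in utilities]
        for i in range(m):
            for j in range(m):
                D[i][j] = min(D[i][j], scores[i] - scores[j])
    return D

D = dominance_matrix(utilities, intervals)
```

Dominance measuring methods then turn D into a ranking, e.g. by ordering alternatives by their row sums; SMAA would instead report, for each alternative, the share of sampled weight vectors that make it the preferred one.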
Abstract:
Illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by ND = 12, 20, 24, 32, 48 and 60 directions of irradiation, each direction associated with a single laser beam or a bundle of NB laser beams, have been considered. The laser beam intensity profile is assumed to be super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the NB beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity of the whole bundle, reducing its associated errors by a factor of √NB, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and only depends on the total number of laser beams Ntot = ND × NB.
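The quoted error reduction can be checked numerically with a generic statistical sketch (this is not the paper's illumination model): averaging NB statistically independent beam errors of equal rms reduces the rms of the bundle's aggregate error by about √NB.

```python
import math
import random

random.seed(7)

def rms(xs):
    """Root-mean-square of a sample."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def bundle_rms_error(nb, sigma=1.0, trials=20000):
    """RMS of the mean error of a bundle of nb statistically independent
    beams, each with individual rms error sigma."""
    return rms([sum(random.gauss(0.0, sigma) for _ in range(nb)) / nb
                for _ in range(trials)])

single = bundle_rms_error(1)    # ~ sigma
bundle = bundle_rms_error(16)   # ~ sigma / sqrt(16)
```

With 16 beams per bundle, the aggregate rms error comes out close to one quarter of the single-beam value, as the √NB scaling predicts.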
Abstract:
Nanofabrication has allowed the development of new concepts such as magnetic logic and racetrack memory, both of which are based on the displacement of magnetic domain walls on magnetic nanostripes. One of the issues that has to be solved before devices can meet market demands is the stochastic behaviour of the domain wall movement in magnetic nanostripes. Here we show that the stochastic nature of the domain wall motion in permalloy nanostripes can be suppressed at very low fields (0.6-2.7 Oe). We also find different field regimes for this stochastic motion that match well with the domain wall propagation modes. The highest pinning probability is found around the precessional mode and, interestingly, it does not depend on the external field in this regime. These results constitute experimental evidence of the intrinsic nature of the stochastic pinning of domain walls in soft magnetic nanostripes.
Abstract:
In recent decades, there has been an increasing interest in systems comprised of several autonomous mobile robots, and as a result, there has been a substantial amount of development in the field of Artificial Intelligence, especially in Robotics. There are several studies in the literature by researchers from the scientific community that focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, for a single robot to accomplish. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis aims to study the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, which means that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds.
This means that each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. In addition, the results of each approach are evaluated by introducing noise into the number of pending loads, in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis can be found in the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks have been presented and discussed. The particular issues studied are:

Threshold models: experiments conducted to test the response threshold model, with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was also introduced in the number of pending loads, and dynamic tasks were generated over time.

Learning automata methods: experiments to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.

Ant colony optimization: experiments to test the ant colony optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems. In these experiments, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
Abstract:
This article proposes a method for calibrating discontinuity sets in rock masses. We present a novel approach for calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. The back-calculated parameters are employed to assess the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs significantly depend on the type of objective function considered; they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
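The encoding and evolution loop can be sketched as below: a toy GA with binary chromosomes, tournament selection, one-point crossover, and bitwise mutation, calibrating two parameters (intensity and mean size) against illustrative target values. The fitness function, bounds, and targets are assumptions for the sketch, not the paper's objective functions or field data.

```python
import random

random.seed(3)

BITS = 10                                  # bits per encoded parameter
BOUNDS = [(0.0, 10.0), (0.0, 5.0)]         # intensity, mean discontinuity size
TARGET = (4.2, 1.7)                        # "true" parameters (illustrative)

def decode(chrom):
    """Map a binary chromosome to real-valued parameters within BOUNDS."""
    params = []
    for k, (lo, hi) in enumerate(BOUNDS):
        gene = chrom[k * BITS:(k + 1) * BITS]
        frac = int("".join(map(str, gene)), 2) / (2 ** BITS - 1)
        params.append(lo + frac * (hi - lo))
    return params

def fitness(chrom):
    """Negative squared calibration error (higher is better)."""
    return -sum((p - t) ** 2 for p, t in zip(decode(chrom), TARGET))

def evolve(pop_size=60, generations=80, p_cross=0.8, p_mut=0.02):
    length = BITS * len(BOUNDS)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return (a if fitness(a) > fitness(b) else b)[:]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, length)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                 # bitwise mutation
                nxt.append([1 - g if random.random() < p_mut else g for g in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = decode(evolve())
```

In a realistic calibration the fitness would compare statistics of the simulated discontinuity network against observed ones rather than the parameters themselves, which are unknown in practice.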
Abstract:
Quantitative descriptive analysis (QDA) is used to describe the nature and the intensity of sensory properties from a single evaluation of a product, whereas temporal dominance of sensations (TDS) is primarily used to identify dominant sensory properties over time. Previous studies with TDS have focused on model systems, but this is the first study to use a sequential approach, i.e. QDA then TDS, in measuring sensory properties of a commercial product category, using the same set of trained assessors (n = 11). The main objectives of this study were: (1) to investigate the benefits of using a sequential approach of QDA and TDS, and (2) to explore the impact of sample composition on taste and flavour perceptions in blackcurrant squashes. The present study proposes an alternative way of determining the choice of attributes for TDS measurement, based on data obtained from previous QDA studies where available. Both methods indicated that the flavour profile was primarily influenced by the level of dilution and the complexity of sample composition combined with blackcurrant juice content. In addition, artificial sweeteners were found to modify the quality of sweetness and could also contribute to bitter notes. Using QDA and TDS in tandem was shown to be more beneficial than using either on its own, enabling a more complete sensory profile of the products.
Abstract:
We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4 − 2ɛ of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's −5/3 law is thus recovered for ɛ = 2, as also predicted by perturbative renormalization. At variance with the perturbative prediction, the −5/3 law emerges in the presence of a saturation in the ɛ dependence of the scaling dimension of the eddy diffusivity at ɛ = 3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.
Abstract:
The aim of this study was to evaluate the sustainability of farm irrigation systems in the Cébalat district in northern Tunisia. It addressed the challenging topic of sustainable agriculture through a bio-economic approach linking a biophysical model to an economic optimisation model. A crop growth simulation model (CropSyst) was used to build a database to determine the relationships between agricultural practices, crop yields and environmental effects (salt accumulation in soil and leaching of nitrates) in a context of high climatic variability. The database was then fed into a recursive stochastic model set for a 10-year plan that allowed analysis of the effects of cropping patterns on farm income, salt accumulation and nitrate leaching. We assumed that the long-term sustainability of soil productivity might be in conflict with short-term farm profitability. Assuming a discount rate of 10% (for the base scenario), the model closely reproduced the current system and allowed prediction of the degradation of soil quality due to long-term salt accumulation. The results showed that there was more accumulation of salt in the soil for the base scenario than for the alternative scenario (discount rate of 0%). This result was induced by the application of a higher quantity of water per hectare in the alternative scenario as compared to the base scenario. The results also showed that nitrogen leaching is very low for the two discount rates and all climate scenarios. In conclusion, the results show that the difference in farm income between the alternative and base scenarios increases over time to reach 45% after 10 years.
Abstract:
This paper presents a new fault detection and isolation scheme for dealing with simultaneous additive and parametric faults. The new design integrates a system for additive fault detection based on Castillo and Zufiria (2009) and a new parametric fault detection and isolation scheme inspired by Munz and Zufiria (2008). It is shown that the existing schemes do not behave correctly when both additive and parametric faults occur simultaneously; to solve the problem, a new integrated scheme is proposed. Computer simulation results are presented to confirm the theoretical studies.
Abstract:
This thesis makes a methodological contribution to the problem of the optimal management of hydropower reservoirs during flood events, considering a stochastic and multiobjective approach. To this end, a methodology is proposed for evaluating flood control strategies in a probabilistic and multiobjective context. In addition, a dynamic real-time flood control environment with forecasts is developed, combining an optimization model and simulation algorithms. These tools assist dam managers in deciding on the most appropriate reservoir operation. After a detailed literature review, it was observed that studies on the optimal management of reservoirs during floods generally use a reduced number of inflow series or hydrographs to characterize the possible scenarios, limiting the satisfactory performance of a given model to similar hydrological situations. Moreover, most of the available studies in this field address the problem of flood routing in multipurpose reservoirs during the flood season, which lasts several months. These characteristics differ from the reality of reservoir management in Spain. With computational advances in real-time information management, a trend was observed towards the implementation of real-time operation tools with forecasts to determine short-term operation (involving flood control). The strategy evaluation methodology proposed in this thesis is based on determining the behavior of the strategies against a spectrum of floods characteristic of the hydrological forcing. To that end, an indicator-based evaluation system is combined with a stochastic flood generation framework, yielding an implicitly stochastic system.
The evaluation system consists of three stages: characterization, synthesis and comparison, in order to handle the complex resulting data structure and carry out the evaluation. In the first stage, characterization variables are defined, linked to the aspects to be evaluated (dam safety, flood control, energy generation, etc.). These variables characterize the behavior of the model for a given aspect and event. In the second stage, the information in these variables is synthesized into a set of indicators, as small as possible. Finally, the comparison is carried out on the basis of those indicators, either by aggregating the objectives into a single indicator or by applying the Pareto dominance criterion, which yields a set of suitable solutions. This methodology was applied to calibrate the parameters of a reservoir flood control optimization model and to compare it with another operating rule, using the aggregation approach. The methodology was then extended to evaluate and compare existing operating rules for flood control in hydropower reservoirs, using the dominance criterion. The versatility of the methodology allows other applications, such as the determination of safety levels or volumes, or the selection of spillway dimensions among several alternatives. For its part, the dynamic flood control environment, with its combined optimization-simulation approach, makes it possible to exploit the advantages of both types of models, facilitating interaction with dam operators. The results improve on those obtained with a reactive operating rule, even when the forecasts deviate considerably from the actual hydrograph. This contributes to reducing the oft-mentioned gap between theoretical development and practical application of optimal reservoir management models.
This thesis presents a methodological contribution addressing the problem of how to operate a hydropower reservoir during floods in order to achieve optimal management, considering a multiobjective and stochastic approach. A methodology is proposed to assess flood control strategies in a multiobjective and probabilistic framework. Additionally, a dynamic flood control environment was developed for real-time operation, including forecasts. This dynamic platform combines simulation and optimization models. These tools may assist dam managers in the decision-making process regarding the most appropriate reservoir operation to be implemented. After a detailed review of the bibliography, it was observed that most of the existing studies in the sphere of flood control reservoir operation consider a reduced number of hydrographs to characterize the reservoir inflows. Consequently, the adequate functioning of a certain strategy may be limited to similar hydrologic scenarios. On the other hand, most of the works in this context tackle the problem of multipurpose flood control operation considering the entire flood season, lasting some months. These considerations differ from the real necessity in the Spanish context. The implementation of real-time reservoir operation is gaining popularity due to computational advances and improvements in real-time data management. The methodology proposed in this thesis for assessing the strategies is based on determining their behavior for a wide range of floods which are representative of the hydrological forcing of the dam. An evaluation algorithm is combined with a stochastic flood generation system to obtain an implicit stochastic analysis framework. The evaluation system consists of three stages: characterization, synthesis and comparison, in order to handle the complex structure of results and, finally, conduct the evaluation process. In the first stage, characterization variables are defined.
These variables should be related to the different aspects to be evaluated (such as dam safety, flood protection, hydropower, etc.). Each of these variables characterizes the behavior of a certain operating strategy for a given aspect and event. In the second stage, this information is synthesized into a reduced group of indicators or objective functions. Finally, the indicators are compared by means of an aggregated approach or a dominance criterion approach. In the first case, a single optimum solution may be achieved, whereas in the second case a set of good solutions is obtained. This methodology was applied to calibrate the parameters of a flood control model and to compare it with another operating policy, using an aggregated method. After that, the methodology was extended to assess and compare some existing hydropower reservoir flood control operating rules, considering the Pareto approach. The versatility of the method allows many other applications, such as determining safety levels or defining spillway characteristics, among others. The dynamic framework for flood control combines optimization and simulation models, exploiting the advantages of both techniques. This facilitates the interaction between dam operators and the model. Improvements over a reactive operating policy are obtained with this system, even if the forecasts deviate significantly from the observed hydrograph. This approach contributes to reducing the gap between the theoretical development in the field of reservoir management and its practical applications.
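The dominance-criterion comparison of indicator vectors can be sketched as follows; the strategy names and indicator values are invented for illustration, and all indicators are taken as minimized.

```python
def dominates(a, b):
    """Pareto dominance for minimized indicators: `a` is nowhere worse than
    `b` and strictly better in at least one indicator."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Indicators per operating strategy: (peak level, downstream damage, energy loss)
strategies = {
    "rule_A":    (0.92, 30.0, 5.0),
    "rule_B":    (0.95, 20.0, 4.0),
    "rule_C":    (0.97, 35.0, 6.0),   # dominated by rule_A
    "opt_model": (0.90, 25.0, 7.0),
}

# Keep only the strategies not dominated by any other (the Pareto set).
front = {name for name, ind in strategies.items()
         if not any(dominates(other, ind)
                    for other in strategies.values() if other != ind)}
print(sorted(front))   # → ['opt_model', 'rule_A', 'rule_B']
```

An aggregated approach would instead collapse each indicator vector into a single weighted score, yielding one optimum rather than a set of non-dominated solutions.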
Abstract:
In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, which is based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework, and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
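A minimal sketch of the residual-based idea: a simulated Ornstein–Uhlenbeck residual monitored by a one-sided CUSUM-type detector. The parameters, the injected fault, and the simple integrated-excess statistic used here are illustrative assumptions, not the paper's likelihood-ratio expressions.

```python
import math
import random

random.seed(11)

def ou_residual(n, dt=0.01, theta=1.0, sigma=0.5, fault_at=None, fault_mean=0.0):
    """Euler-Maruyama simulation of an OU residual
    dx = theta * (mu - x) dt + sigma dW; after step `fault_at`
    the mean mu jumps from 0 to `fault_mean` (the fault)."""
    x, path = 0.0, []
    for k in range(n):
        mu = fault_mean if fault_at is not None and k >= fault_at else 0.0
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

def cusum_alarm(path, dt=0.01, slack=0.1, threshold=3.0):
    """One-sided CUSUM-type detector: integrate the positive excess of the
    residual over `slack`, reset at zero, alarm when the threshold is crossed.
    Returns the alarm index, or None if no alarm is raised."""
    g = 0.0
    for k, x in enumerate(path):
        g = max(0.0, g + (x - slack) * dt)
        if g > threshold:
            return k
    return None

healthy = ou_residual(2000)
faulty = ou_residual(2000, fault_at=1000, fault_mean=1.5)
```

On the healthy residual the statistic stays below threshold, while the mean shift injected at step 1000 drives it across the threshold shortly after the fault.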
Abstract:
In this work we investigated whether there is a relationship between the dominant behaviour of dialogue participants and their verbal intelligence. The analysis is based on a corpus containing 56 dialogues and verbal intelligence scores of the test persons. All the dialogues were divided into three groups: H-H is a group of dialogues between higher verbal intelligence participants, L-L is a group of dialogues between lower verbal intelligence participants, and L-H is a group of all the other dialogues. The dominance scores of the dialogue partners from each group were analysed. The analysis showed that dominance scores and verbal intelligence coefficients were positively correlated for the L-L group. Verbal intelligence scores of the test persons were compared to other features that may reflect dominant behaviour. The analysis showed that the number of interruptions, long utterances, the number of times the floor was grabbed, the influence diffusion model, the number of agreements and several acoustic features may be related to verbal intelligence. These features were used for the automatic classification of the dialogue partners into two groups (lower and higher verbal intelligence participants); the achieved accuracy was 89.36%.
Abstract:
This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-selection of heterogeneous specialized tasks by autonomous robots. We focus on a distributed or decentralized approach, as we are particularly interested in solutions where the robots themselves, autonomously and in an individual manner, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose a solution using two different approaches: Response Threshold Models and Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by the dynamic generation of loads through time. The paper ends with a critical discussion of experimental results.
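A minimal sketch of the response-threshold idea (the thresholds, loads, and stimulus model below are invented for illustration): each robot responds to a task's stimulus — here, its pending load — with probability s²/(s² + θ²), so a robot with a low threshold for a task type engages with it readily while high-threshold robots mostly ignore it.

```python
import random

random.seed(5)

def response_probability(stimulus, threshold):
    """Classic response-threshold rule: P = s^2 / (s^2 + theta^2)."""
    if stimulus == 0:
        return 0.0
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def run(robot_thresholds, loads, steps=400):
    """At each step, every robot considers the task with the highest response
    probability for it and serves one pending load with that probability."""
    pending = list(loads)
    for _ in range(steps):
        for thresholds in robot_thresholds:
            probs = [response_probability(pending[t], thresholds[t])
                     for t in range(len(pending))]
            t = max(range(len(pending)), key=probs.__getitem__)
            if pending[t] > 0 and random.random() < probs[t]:
                pending[t] -= 1
    return pending

# Two robots, each specialised (low threshold) in a different task type.
robots = [[2.0, 8.0], [8.0, 2.0]]
remaining = run(robots, [50, 50])
```

With enough steps, the two specialists clear their respective task queues without any central assignment; noisy load estimates can be simulated by perturbing `pending` inside the loop.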
Abstract:
Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and low computational efficiency. This study attempts to propose a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. The parameter means and variances can then be statistically estimated from all the parameter predictions obtained by running all the samples. Meanwhile, the analysis of variance approach is employed for the evaluation of parameter variability significance. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with the existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits consist in its simple implementation and cost efficiency in response computation and inverse optimization.
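The decomposition can be sketched as follows: a stand-in one-parameter "FE model" (a spring-mass natural frequency), a linear response surface fitted from a few model runs, and Monte Carlo samples of the response, each inverted deterministically through the surrogate. The model, distributions, and all numbers are illustrative assumptions, not the study's structures.

```python
import math
import random

random.seed(9)

def fe_response(k):
    """Stand-in for an expensive FE run: frequency of a spring-mass system."""
    return math.sqrt(k) / (2.0 * math.pi)

# 1) Fit a response surface (here a linear surrogate) from a small design.
design = [80.0, 90.0, 100.0, 110.0, 120.0]
resp = [fe_response(k) for k in design]
mx = sum(design) / len(design)
my = sum(resp) / len(resp)
b = (sum((x - mx) * (y - my) for x, y in zip(design, resp))
     / sum((x - mx) ** 2 for x in design))
a = my - b * mx                        # surrogate: response ~ a + b * k

# 2) Monte Carlo: sample "measured" responses, invert the surrogate per sample.
true_mean, true_sd = 100.0, 5.0
responses = [fe_response(random.gauss(true_mean, true_sd)) for _ in range(5000)]
k_pred = [(y - a) / b for y in responses]   # one deterministic inverse per sample

# 3) Statistics of the updated parameter.
k_mean = sum(k_pred) / len(k_pred)
k_sd = math.sqrt(sum((k - k_mean) ** 2 for k in k_pred) / (len(k_pred) - 1))
```

Each Monte Carlo sample triggers only a cheap surrogate inversion rather than an FE run, which is the cost-efficiency argument made above; the recovered mean and standard deviation come out close to the values used to generate the responses.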