16 results for Stochastic PDE
at Universidad Politécnica de Madrid
Abstract:
Illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by ND = 12, 20, 24, 32, 48 and 60 directions of irradiation, each direction associated with a single laser beam or a bundle of NB laser beams, have been considered. The laser beam intensity profile is assumed super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the NB beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity of the whole bundle, reducing the associated errors by the factor 1/√NB, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and only depends on the total number of laser beams Ntot = ND × NB.
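As a quick illustration of this statistical averaging effect, the following sketch (not the authors' code; the 5% per-beam error level is an arbitrary assumption) checks numerically that the rms error of a bundle of NB independent beams shrinks as 1/√NB:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_single = 0.05          # assumed 5% rms power imbalance per beam
trials = 100_000

for NB in (1, 4, 16):
    # Power of a bundle = mean of NB statistically independent beam powers.
    bundle_error = rng.normal(0.0, sigma_single, size=(trials, NB)).mean(axis=1)
    print(f"NB={NB:2d}: rms={bundle_error.std():.4f}, "
          f"expected={sigma_single / np.sqrt(NB):.4f}")
```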
Abstract:
Nanofabrication has allowed the development of new concepts such as magnetic logic and race-track memory, both of which are based on the displacement of magnetic domain walls on magnetic nanostripes. One of the issues that has to be solved before devices can meet market demands is the stochastic behaviour of the domain wall movement in magnetic nanostripes. Here we show that the stochastic nature of the domain wall motion in permalloy nanostripes can be suppressed at very low fields (0.6-2.7 Oe). We also find different field regimes for this stochastic motion that match well with the domain wall propagation modes. The highest pinning probability is found around the precessional mode and, interestingly, it does not depend on the external field in this regime. These results constitute experimental evidence of the intrinsic nature of the stochastic pinning of domain walls in soft magnetic nanostripes.
Abstract:
In recent decades, there has been an increasing interest in systems comprised of several autonomous mobile robots, and as a result, there has been substantial development in the field of Artificial Intelligence, especially in Robotics. There are several studies in the literature by researchers from the scientific community that focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to be accomplished by a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis aims to study the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally depending on the load or the number of pending tasks to be performed. In addition, the evaluation of the results for each approach is of particular interest, comparing the results obtained when noise is introduced into the number of pending loads in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks have been presented and discussed. The particular issues studied are:
Threshold models: It presents the experiments conducted to test the response threshold model (see the sketch after this list), with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise in the number of pending loads has also been introduced and dynamic tasks have been generated over time.
Learning automata methods: It describes the experiments conducted to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: The goal of the experiments presented is to test the ant colony optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems. In the experiments performed, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
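For context, a minimal sketch of the classical response-threshold rule used in such division-of-labor models (the task names, stimulus values and the exponent n = 2 are illustrative assumptions, not taken from the thesis):

```python
import random

def response_probability(stimulus, threshold, n=2):
    # Classical response-threshold rule: P = s^n / (s^n + theta^n).
    return stimulus**n / (stimulus**n + threshold**n)

def select_task(stimuli, thresholds):
    # Each robot independently decides, task by task, whether to engage.
    for task, s in stimuli.items():
        if random.random() < response_probability(s, thresholds[task]):
            return task
    return None  # robot stays idle this cycle

# Hypothetical example: stimuli grow with the number of pending loads.
stimuli = {"transport": 8.0, "clean": 2.0}
thresholds = {"transport": 5.0, "clean": 5.0}
print(select_task(stimuli, thresholds))
```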
Abstract:
This article proposes a method for calibrating the discontinuity sets in rock masses. We present a novel approach for the calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. The back-calculated parameters are employed to assess the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered. They also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases the algorithm can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
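To make the GA machinery concrete, here is a minimal self-contained sketch of binary-encoded calibration with selection, crossover and mutation; the two decoded parameters, their ranges and the synthetic target values are hypothetical stand-ins for the discontinuity-network parameters, not the paper's setup:

```python
import random

BITS = 16  # bits per encoded parameter

def decode(chrom, lo, hi):
    # Map a binary chromosome segment to a real value in [lo, hi].
    return lo + int(chrom, 2) * (hi - lo) / (2**BITS - 1)

def fitness(chrom):
    # Hypothetical objective: negative squared error against known target
    # parameters (intensity = 2.5, mean size = 1.2) of a synthetic network.
    intensity = decode(chrom[:BITS], 0.0, 10.0)
    mean_size = decode(chrom[BITS:], 0.0, 5.0)
    return -((intensity - 2.5)**2 + (mean_size - 1.2)**2)

def evolve(pop_size=50, generations=100, p_cross=0.8, p_mut=0.01):
    pop = ["".join(random.choice("01") for _ in range(2 * BITS))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection.
        parents = [max(random.sample(pop, 3), key=fitness)
                   for _ in range(pop_size)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, 2 * BITS)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [a, b]
        # Bit-flip mutation.
        pop = ["".join(c if random.random() > p_mut else "10"[int(c)]
                       for c in ch) for ch in nxt]
    return max(pop, key=fitness)

best = evolve()
print(decode(best[:BITS], 0.0, 10.0), decode(best[BITS:], 0.0, 5.0))
```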
Abstract:
We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4−2ε of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's −5/3 law is, thus, recovered for ε=2 as also predicted by perturbative renormalization. At variance with the perturbative prediction, the −5/3 law emerges in the presence of a saturation in the ε dependence of the scaling dimension of the eddy diffusivity at ε=3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.
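For reference, the −5/3 law recovered at ε=2 is the classical Kolmogorov inertial-range spectrum:

```latex
% Kolmogorov inertial-range spectrum of the kinetic energy (K41):
E(k) \;=\; C_K\, \bar{\varepsilon}^{\,2/3}\, k^{-5/3}
```

Here C_K is the Kolmogorov constant and the barred quantity is the mean energy dissipation rate, not to be confused with the forcing exponent ε of the abstract.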
Abstract:
The aim of this study was to evaluate the sustainability of farm irrigation systems in the Cébalat district in northern Tunisia. It addressed the challenging topic of sustainable agriculture through a bio-economic approach linking a biophysical model to an economic optimisation model. A crop growth simulation model (CropSyst) was used to build a database to determine the relationships between agricultural practices, crop yields and environmental effects (salt accumulation in soil and leaching of nitrates) in a context of high climatic variability. The database was then fed into a recursive stochastic model set over a 10-year plan, which allowed analysing the effects of cropping patterns on farm income, salt accumulation and nitrate leaching. We assumed that the long-term sustainability of soil productivity might be in conflict with short-term farm profitability. Assuming a discount rate of 10% (for the base scenario), the model closely reproduced the current system and allowed us to predict the degradation of soil quality due to long-term salt accumulation. The results showed that there was more salt accumulation in the soil for the base scenario than for the alternative scenario (discount rate of 0%), induced by the application of a higher quantity of water per hectare in the alternative scenario than in the base scenario. The results also showed that nitrogen leaching is very low for both discount rates and all climate scenarios. In conclusion, the results show that the difference in farm income between the alternative and base scenarios increases over time, reaching 45% after 10 years.
Abstract:
This paper presents a new fault detection and isolation scheme for dealing with simultaneous additive and parametric faults. The new design integrates a system for additive fault detection based on Castillo and Zufiria (2009) and a new parametric fault detection and isolation scheme inspired by Munz and Zufiria (2008). It is shown that the existing schemes do not behave correctly when both additive and parametric faults occur simultaneously; to solve the problem, a new integrated scheme is proposed. Computer simulation results are presented to confirm the theoretical studies.
Abstract:
In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, which is based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework, and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
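As a rough illustration of a CUSUM-type scheme, the sketch below applies the classical one-sided CUSUM to an i.i.d. Gaussian mean shift; this is a simplification of the paper's setting (which works with likelihood ratios for an Ornstein–Uhlenbeck residual), and all numeric values are assumptions:

```python
import numpy as np

def cusum(residual, mu0, mu1, sigma, h):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in i.i.d. Gaussian
    samples: accumulate log-likelihood ratios, clip at zero, and raise
    an alarm when the statistic exceeds the threshold h."""
    g = 0.0
    for k, x in enumerate(residual):
        # Log-likelihood ratio of one sample under H1 vs H0.
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        g = max(0.0, g + llr)
        if g > h:
            return k          # declare a fault at sample k
    return None

rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0, 1, 200),    # healthy regime
                    rng.normal(1, 1, 200)])   # additive fault from k=200
print(cusum(r, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0))
```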
Abstract:
This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-selection of heterogeneous specialized tasks by autonomous robots. In this paper we focus on a truly distributed or decentralized approach, as we are particularly interested in a decentralized solution where the robots themselves, autonomously and in an individual manner, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose a solution using two different approaches, applying Response Threshold Models as well as Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by generating loads dynamically over time. The paper ends with a critical discussion of experimental results.
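For context, a minimal sketch of a learning automata-style update, here the standard linear reward-inaction rule; the task set, reward probabilities and step size are invented for illustration and are not the paper's configuration:

```python
import random

def l_ri_step(p, chosen, reward, a=0.1):
    # Linear reward-inaction update: reinforce the chosen task only when
    # the environment signals success; leave p unchanged otherwise.
    if reward:
        for t in p:
            if t == chosen:
                p[t] += a * (1.0 - p[t])
            else:
                p[t] *= (1.0 - a)
    return p

# Hypothetical two-task scenario: "transport" succeeds 80% of the time.
p = {"transport": 0.5, "clean": 0.5}
for _ in range(500):
    task = random.choices(list(p), weights=p.values())[0]
    success = random.random() < (0.8 if task == "transport" else 0.3)
    p = l_ri_step(p, task, success)
print(p)   # selection probabilities concentrate on the rewarding task
```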
Abstract:
Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means the statistical properties of structural parameters, instead of deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting the deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis of variance approach is employed to evaluate the significance of parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
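A minimal sketch of the sample-wise decomposition described above, with a made-up one-parameter quadratic response surface standing in for the FE surrogate and an assumed Gaussian distribution of the measured response:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical quadratic response surface fitted to FE runs: the response
# (e.g., a natural frequency) as a function of one stiffness parameter.
surrogate = lambda theta: 10.0 + 3.0 * theta - 0.2 * theta**2

rng = np.random.default_rng(2)
# Monte Carlo samples from the (assumed) measured response distribution.
responses = rng.normal(loc=18.0, scale=0.5, size=2000)

# Each sample defines one deterministic inverse problem:
# find theta such that surrogate(theta) = response.
thetas = [brentq(lambda t: surrogate(t) - r, 0.0, 7.0) for r in responses]

# Statistics of the parameter follow from the ensemble of solutions.
print(f"mean={np.mean(thetas):.3f}, std={np.std(thetas):.3f}")
```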
Abstract:
A hybrid Eulerian-Lagrangian approach is employed to simulate heavy particle dispersion in turbulent pipe flow. The mean flow is provided by the Eulerian simulations developed by means of JetCode, whereas the fluid fluctuations seen by the particles are prescribed by a stochastic differential equation based on a normalized Langevin model. The statistics of the particle velocity are compared to LES data containing detailed velocity statistics for particles with a diameter of 20.4 µm. The model is in good agreement with the LES data for the axial mean velocity, whereas the rms of the axial and radial velocities should be adjusted.
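A minimal sketch of such a normalized Langevin (Ornstein–Uhlenbeck-type) fluctuation model, integrated with the Euler–Maruyama method; the time scale and rms level are illustrative choices, not the JetCode configuration:

```python
import numpy as np

def ou_fluctuation(T_L, sigma, dt, n_steps, rng):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck-type Langevin
    equation, du = -u/T_L dt + sigma*sqrt(2/T_L) dW, a common normalized
    model for the fluid velocity fluctuation seen by a particle."""
    u = np.zeros(n_steps)
    for k in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        u[k] = u[k-1] - u[k-1] / T_L * dt + sigma * np.sqrt(2.0 / T_L) * dW
    return u

rng = np.random.default_rng(3)
u = ou_fluctuation(T_L=0.05, sigma=0.1, dt=1e-3, n_steps=20_000, rng=rng)
# The stationary rms of u should approach sigma by construction.
print(f"stationary rms ~ {u[5000:].std():.4f} (target 0.1)")
```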
Abstract:
In this paper a new method for fault isolation in a class of continuous-time stochastic dynamical systems is proposed. The method is framed in the context of model-based analytical redundancy, consisting of the generation of a residual signal by means of a diagnostic observer for its subsequent analysis. Once a fault has been detected, and assuming some basic a priori knowledge about the set of possible failures in the plant, the isolation task is formulated as a type of on-line statistical classification problem. The proposed isolation scheme employs, in parallel, different hypothesis tests on a statistic of the residual signal, one test for each possible fault. The isolation method is characterized by deriving, for the one-dimensional case, a sufficient isolability condition as well as an upper bound on the probability of missed isolation. Simulation examples illustrate the applicability of the proposed scheme.
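As a rough sketch of running one test per hypothesized fault in parallel on a residual statistic (here a Gaussian maximum-likelihood decision on the window mean, an assumed stand-in for the paper's statistic and tests):

```python
import numpy as np

def isolate(residual_window, fault_means, sigma):
    """Score every hypothesized fault in parallel and pick the one whose
    assumed residual mean best explains the observed window."""
    x_bar = residual_window.mean()
    n = len(residual_window)
    # Gaussian log-likelihood of the window mean under each fault.
    scores = {f: -n * (x_bar - m) ** 2 / (2 * sigma**2)
              for f, m in fault_means.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(4)
window = rng.normal(1.9, 1.0, size=100)       # residual after some fault
print(isolate(window, {"fault_1": 1.0, "fault_2": 2.0}, sigma=1.0))
```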
Abstract:
This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a certain sequence of actions (policy) when an agent operates a Markov decision process with a large state-space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
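For background, a minimal sketch of value-function prediction with linear feature approximation, here using standard TD(0) rather than the paper's adaptive estimator; the chain MDP and one-hot feature map are invented for illustration:

```python
import numpy as np

def td0_linear(episodes, phi, d, alpha=0.05, gamma=0.95):
    # Standard TD(0) with linear value approximation V(s) = w . phi(s).
    w = np.zeros(d)
    for episode in episodes:
        for s, r, s_next in episode:           # (state, reward, next state)
            v_next = 0.0 if s_next is None else w @ phi(s_next)
            delta = r + gamma * v_next - w @ phi(s)   # TD error
            w += alpha * delta * phi(s)
    return w

# Hypothetical 4-state chain: move right w.p. 0.7, reward 1 at the end.
rng = np.random.default_rng(5)
phi = lambda s: np.eye(4)[s]                   # one-hot features
def rollout():
    s, steps = 0, []
    while s < 3:
        s_next = s + 1 if rng.random() < 0.7 else s
        steps.append((s, 1.0 if s_next == 3 else 0.0,
                      None if s_next == 3 else s_next))
        s = s_next
    return steps

w = td0_linear([rollout() for _ in range(2000)], phi, d=4)
print(w)   # approximate value of each state under the fixed policy
```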
Abstract:
In this paper, a computer-based tool is developed to analyze student performance along a given curriculum. The proposed software makes use of historical data to compute passing/failing probabilities and simulates future student academic performance based on stochastic programming methods (Monte Carlo) according to the specific university regulations. This makes it possible to compute the academic performance rates for the specific subjects of the curriculum for each semester, as well as the overall rates for the set of subjects in the semester, namely the efficiency rate and the success rate. Additionally, we compute the rates for the Bachelor's degree: the graduation rate, measured as the percentage of students who finish as scheduled or taking an extra year, and the efficiency rate, measured as the percentage of credits of the curriculum with respect to the credits actually taken. In Spain, these metrics have been defined by the National Quality Evaluation and Accreditation Agency (ANECA). Moreover, the sensitivity of the performance metrics to some of the parameters of the simulator is analyzed using statistical tools (Design of Experiments). The simulator has been adapted to the curriculum characteristics of the Bachelor in Engineering Technologies at the Technical University of Madrid (UPM).
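A toy version of such a Monte Carlo student simulator; the subjects, passing probabilities and "two pending subjects per semester" regulation are invented stand-ins, not the UPM rules or the paper's data:

```python
import random

def simulate_student(pass_prob, max_semesters=10, per_semester=2):
    """Monte Carlo draw of one student's path through a list of subjects,
    retaking failed subjects until passed or time runs out."""
    pending = list(pass_prob)
    semesters = 0
    while pending and semesters < max_semesters:
        semesters += 1
        for subject in pending[:per_semester]:
            if random.random() < pass_prob[subject]:
                pending.remove(subject)
    return semesters if not pending else None    # None = did not finish

# Hypothetical historical passing probabilities per subject.
pass_prob = {"Calculus": 0.6, "Physics": 0.7, "Programming": 0.8,
             "Statistics": 0.75}
runs = [simulate_student(pass_prob) for _ in range(10_000)]
graduated = [s for s in runs if s is not None]
print(f"graduation rate: {len(graduated) / len(runs):.2%}, "
      f"mean semesters: {sum(graduated) / len(graduated):.2f}")
```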
Quality-optimization algorithm based on stochastic dynamic programming for MPEG DASH video streaming
Abstract:
In contrast to traditional push-based protocols, adaptive streaming techniques like Dynamic Adaptive Streaming over HTTP (DASH) shift the focus to the client, which dynamically requests different-quality portions of the content to cope with a limited and variable bandwidth while aiming to maximize the quality perceived by the user. Since the DASH adaptation logic at the client is not covered by the standard, we propose a solution based on Stochastic Dynamic Programming (SDP) techniques to find the optimal request policies that guarantee the user's Quality of Experience (QoE). Our algorithm is evaluated in a simulated streaming session and compared with other adaptation approaches. The results show that our proposal outperforms them in terms of QoE, requesting higher qualities on average.
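A minimal sketch of an SDP formulation for this kind of adaptation logic, solved by backward induction over a toy buffer/bandwidth state space; every constant below (bitrates, bandwidth chain, rebuffering penalty, horizon) is an illustrative assumption, not the paper's model:

```python
import itertools
import numpy as np

QUALITIES = [1.0, 2.5, 5.0]      # Mbps required per quality level
SEG = 2.0                        # seconds of video per segment
BW = [2.0, 4.0, 8.0]             # bandwidth states in Mbps
P_BW = np.array([[.7, .2, .1],
                 [.2, .6, .2],
                 [.1, .2, .7]])  # Markov bandwidth transitions
B_MAX, HORIZON = 10, 50          # buffer capacity (segments), plan length

def reward(q, buf_next):
    # QoE proxy: higher quality is good, an empty buffer (rebuffering)
    # is heavily penalized.
    return QUALITIES[q] - (10.0 if buf_next <= 0 else 0.0)

V = np.zeros((B_MAX + 1, len(BW)))            # terminal values
policy = np.zeros((HORIZON, B_MAX + 1, len(BW)), dtype=int)
for t in reversed(range(HORIZON)):            # backward induction
    V_new = np.zeros_like(V)
    for buf, b in itertools.product(range(B_MAX + 1), range(len(BW))):
        best_q, best_val = 0, -np.inf
        for q in range(len(QUALITIES)):
            # Download time of one segment at quality q under bandwidth b;
            # the buffer gains one segment and loses the playout time.
            dt = QUALITIES[q] * SEG / BW[b]
            buf_next = min(B_MAX, max(0, buf + 1 - int(round(dt / SEG))))
            val = reward(q, buf_next) + P_BW[b] @ V[buf_next]
            if val > best_val:
                best_q, best_val = q, val
        V_new[buf, b], policy[t, buf, b] = best_val, best_q
    V = V_new

print(policy[0, 5])   # optimal quality per bandwidth state at buffer = 5
```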