915 results for Stochastic frontier
Abstract:
This article proposes a method for calibrating discontinuity families in rock masses. We present a novel approach for the calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, we present examples applying the method to cases with known parameters of the original Poisson discontinuity network. Model parameters are encoded as chromosomes using a binary representation, and these chromosomes evolve as successive generations of a randomly generated initial population subjected to the GA operations of selection, crossover and mutation. The back-calculated parameters are used to assess the inference capabilities of the model under different objective functions and different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs depend significantly on the type of objective function considered. They also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases the algorithm can be expected to provide parameter estimates with relatively small errors for those network parameters (such as intensity and mean discontinuity size) that most strongly influence engineering applications.
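As an illustration, a minimal sketch of the GA loop described above, in Python, with binary-encoded chromosomes and selection/crossover/mutation. The objective function, parameter bounds and the target intensity value are hypothetical stand-ins, not taken from the paper:

```python
# Minimal GA sketch: binary chromosomes encode a network parameter
# (e.g., discontinuity intensity); selection, crossover and mutation
# evolve a random initial population. All numbers are illustrative.
import random

N_BITS, POP, GENS, P_CROSS, P_MUT = 16, 50, 100, 0.8, 0.01

def decode(bits, lo, hi):
    """Map a binary chromosome to a real parameter in [lo, hi]."""
    return lo + int("".join(map(str, bits)), 2) * (hi - lo) / (2**len(bits) - 1)

def fitness(bits):
    # Placeholder objective: distance between the decoded intensity and a
    # target inferred from field data (hypothetical value 2.5 per metre).
    return -abs(decode(bits, 0.0, 10.0) - 2.5)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    # Tournament selection.
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(POP)]
    children = []
    for a, b in zip(parents[::2], parents[1::2]):
        if random.random() < P_CROSS:            # single-point crossover
            cut = random.randrange(1, N_BITS)
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        children += [a, b]
    # Bit-flip mutation.
    pop = [[1 - g if random.random() < P_MUT else g for g in c] for c in children]

best = max(pop, key=fitness)
print("calibrated intensity ~", decode(best, 0.0, 10.0))
```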
Abstract:
We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4−2ε of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe convergence of the resulting renormalization group flow to a unique fixed point, which yields a kinetic energy spectrum scaling in agreement with canonical dimensional analysis. Kolmogorov's −5/3 law is thus recovered for ε=2, as also predicted by perturbative renormalization. At variance with the perturbative prediction, the −5/3 law emerges in the presence of a saturation in the ε dependence of the scaling dimension of the eddy diffusivity at ε=3/2, when, according to perturbative renormalization, the velocity field becomes infrared relevant.
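For reference, the −5/3 spectrum mentioned above follows from Kolmogorov's classical dimensional argument (here ε denotes the mean energy dissipation rate, not the forcing exponent of the abstract):

```latex
% Assume the energy spectrum E(k) depends only on the mean dissipation
% rate \epsilon and the wavenumber k, and match dimensions.
\[
  [E(k)] = L^{3}T^{-2}, \qquad [\epsilon] = L^{2}T^{-3}, \qquad [k] = L^{-1}
\]
\[
  E(k) = C\,\epsilon^{a}k^{b}
  \;\Rightarrow\;
  L^{3}T^{-2} = L^{2a-b}\,T^{-3a}
  \;\Rightarrow\;
  a = \tfrac{2}{3},\quad b = -\tfrac{5}{3}
\]
\[
  E(k) = C\,\epsilon^{2/3}k^{-5/3}
\]
```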
Abstract:
The aim of this study was to evaluate the sustainability of farm irrigation systems in the Cébalat district in northern Tunisia. It addressed the challenging topic of sustainable agriculture through a bio-economic approach linking a biophysical model to an economic optimisation model. A crop growth simulation model (CropSyst) was used to build a database to determine the relationships between agricultural practices, crop yields and environmental effects (salt accumulation in soil and leaching of nitrates) in a context of high climatic variability. The database was then fed into a recursive stochastic model set over a 10-year plan, which allowed analysis of the effects of cropping patterns on farm income, salt accumulation and nitrate leaching. We assumed that the long-term sustainability of soil productivity might conflict with short-term farm profitability. Assuming a discount rate of 10% (the base scenario), the model closely reproduced the current system and predicted the degradation of soil quality due to long-term salt accumulation. The results showed more salt accumulation in the soil for the base scenario than for the alternative scenario (discount rate of 0%), induced by the higher quantity of water applied per hectare in the alternative scenario. The results also showed that nitrogen leaching is very low for both discount rates and all climate scenarios. In conclusion, the difference in farm income between the alternative and base scenarios increases over time, reaching 45% after 10 years.
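A toy net-present-value comparison (not the paper's model) illustrates why the discount rate drives the trade-off between short-term profit and long-term soil productivity; all income figures below are hypothetical:

```python
# Toy NPV illustration: strategy A earns more now but lets salt build up
# and yields decline; strategy B applies more leaching water and sustains
# yields. A higher discount rate devalues the future losses of A.
def npv(incomes, rate):
    return sum(y / (1 + rate) ** t for t, y in enumerate(incomes))

income_a = [110 - 5 * t for t in range(10)]   # degrading soil productivity
income_b = [90] * 10                          # sustained productivity

for rate in (0.10, 0.0):
    print(rate, round(npv(income_a, rate), 1), round(npv(income_b, rate), 1))
# At a 10% discount rate the degrading strategy has the higher NPV;
# at 0% the sustainable strategy wins.
```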
Abstract:
This paper presents a new fault detection and isolation scheme for dealing with simultaneous additive and parametric faults. The new design integrates a system for additive fault detection based on Castillo and Zufiria (2009) and a new parametric fault detection and isolation scheme inspired by Munz and Zufiria (2008). It is shown that existing schemes do not behave correctly when additive and parametric faults occur simultaneously; to solve this problem, a new integrated scheme is proposed. Computer simulation results are presented to confirm the theoretical studies.
Abstract:
In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
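A minimal sketch, in Python, of a CUSUM-type test on an Ornstein–Uhlenbeck residual in the spirit of the abstract. The fault model (a known mean shift), all parameter values, and the Gaussian approximation of the sampled residual are assumptions for illustration:

```python
# CUSUM-type sketch on an OU residual: a fault shifts the residual mean
# from 0 to `shift`. Sampled values are treated as approximately Gaussian
# with the OU stationary variance, so this is an approximated scheme.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 2000
shift, threshold = 0.8, 25.0
sd2 = sigma**2 / (2 * theta)                  # stationary variance of the OU

r, s, alarm = 0.0, 0.0, None
for k in range(n):
    mu = shift if k >= 1000 else 0.0          # fault onset at step 1000
    # Euler–Maruyama step of dr = theta*(mu - r)*dt + sigma*dW
    r += theta * (mu - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # CUSUM of the Gaussian log-likelihood ratio, mean 0 vs. mean `shift`
    s = max(0.0, s + (shift / sd2) * (r - shift / 2.0))
    if alarm is None and s > threshold:
        alarm = k
print("alarm raised at step", alarm)          # expected shortly after 1000
```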
Abstract:
This paper focuses on the general problem of coordinating multiple robots, and more specifically on the self-selection of heterogeneous specialized tasks by autonomous robots. We focus on a distributed, decentralized approach in which the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose a solution using two different approaches: Response Threshold Models and Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate each robot's error in estimating the real number of pending tasks, and by generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
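A sketch of the classic response-threshold rule (Bonabeau et al.) that Response Threshold Models build on: a robot engages task j with probability s_j²/(s_j² + θ_ij²), where s_j is the task stimulus and θ_ij the robot's threshold. The thresholds and stimuli below are illustrative, not taken from the paper's scenario:

```python
# Response-threshold sketch: two robots, each specialized (low threshold)
# in a different task, probabilistically self-select a task.
import random

thresholds = [[10.0, 40.0], [40.0, 10.0]]   # theta[i][j]: robot i, task j
stimuli = [25.0, 25.0]                      # pending load per task

def pick_task(i):
    # Engagement propensities s^2 / (s^2 + theta^2), normalized to pick one.
    probs = [s**2 / (s**2 + thresholds[i][j]**2) for j, s in enumerate(stimuli)]
    r, acc = random.random() * sum(probs), 0.0
    for j, p in enumerate(probs):
        acc += p
        if r <= acc:
            return j
    return len(probs) - 1

print([pick_task(i) for i in range(2)])     # robots self-select tasks
```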
Abstract:
Stochastic model updating must be considered to quantify the uncertainties inherent in real-world engineering structures. In this way the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, stochastic model updating is much more complicated to implement than deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes the stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models serve as surrogates for the original FE models, simplifying programming, speeding up response computation and easing the inverse optimization. Monte Carlo simulation generates samples from the assumed or measured probability distributions of the responses. Each sample corresponds to an individual deterministic inverse process that predicts deterministic parameter values, and the parameter means and variances are then estimated statistically from the predictions over all samples. Meanwhile, an analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. Compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its low cost in response computation and inverse optimization.
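A minimal sketch of the decomposition described above: a response surface replaces the FE model, Monte Carlo samples of the measured response are inverted one at a time, and parameter statistics are estimated from the predictions. The "FE model", the quadratic surface and the response distribution are synthetic stand-ins:

```python
# Stochastic updating decomposed into deterministic inverse problems.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def fe_model(e):                 # stand-in for an expensive FE run:
    return 10.0 * np.sqrt(e)     # a frequency as a function of stiffness e

# 1) Fit a quadratic response surface from a few designed FE runs.
x = np.linspace(0.5, 2.0, 7)
surrogate = np.poly1d(np.polyfit(x, fe_model(x), 2))

# 2) Monte Carlo samples from the (assumed) measured response distribution.
samples = rng.normal(loc=10.0, scale=0.3, size=500)

# 3) One deterministic inverse problem per sample, on the cheap surrogate.
est = [minimize_scalar(lambda e, f=f: (surrogate(e) - f) ** 2,
                       bounds=(0.5, 2.0), method="bounded").x
       for f in samples]

print("parameter mean ~", np.mean(est), " variance ~", np.var(est))
```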
Abstract:
A hybrid Eulerian-Lagrangian approach is employed to simulate heavy particle dispersion in turbulent pipe flow. The mean flow is provided by Eulerian simulations performed with JetCode, whereas the fluid fluctuations seen by the particles are prescribed by a stochastic differential equation based on a normalized Langevin model. The statistics of particle velocity are compared to LES data containing detailed velocity statistics for particles with a diameter of 20.4 µm. The model agrees well with the LES data for the axial mean velocity, whereas the rms of the axial and radial velocities needs adjustment.
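A sketch of the kind of normalized Langevin (Ornstein–Uhlenbeck) fluctuation model the abstract refers to, integrated with Euler–Maruyama; the timescale, rms level and step size are illustrative, not those of the paper:

```python
# Fluctuating velocity seen by a particle, modeled as an OU process with
# integral timescale T_L and stationary rms sigma.
import numpy as np

rng = np.random.default_rng(2)
T_L, sigma, dt, n = 0.05, 0.1, 1e-4, 10000

u = np.zeros(n)
for k in range(1, n):
    # du = -(u / T_L) dt + sigma * sqrt(2 dt / T_L) dW
    u[k] = u[k-1] - (u[k-1] / T_L) * dt \
           + sigma * np.sqrt(2.0 * dt / T_L) * rng.standard_normal()

print("rms ~", u.std())          # should approach sigma for long runs
```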
Abstract:
In this paper a new method for fault isolation in a class of continuous-time stochastic dynamical systems is proposed. The method is framed in the context of model-based analytical redundancy: a residual signal is generated by means of a diagnostic observer and then analyzed. Once a fault has been detected, and assuming some basic a priori knowledge about the set of possible failures in the plant, the isolation task is formulated as a type of on-line statistical classification problem. The proposed isolation scheme employs, in parallel, different hypothesis tests on a statistic of the residual signal, one test for each possible fault. The method is characterized, for the one-dimensional case, by deriving a sufficient isolability condition as well as an upper bound on the probability of missed isolation. Simulation examples illustrate the applicability of the proposed scheme.
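A sketch of the parallel-test idea: after detection, a statistic of the residual window is tested against each hypothesized fault signature. The abstract does not specify the statistic or test, so a one-sample t-test on the window mean stands in, and the signatures are hypothetical:

```python
# Parallel hypothesis tests for fault isolation on a residual window.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
signatures = {"fault_1": 0.5, "fault_2": 1.5, "fault_3": -1.0}
window = rng.normal(1.5, 0.4, size=100)       # residual generated under fault_2

def isolate(window, signatures, alpha=0.01):
    accepted = []
    for name, mu in signatures.items():
        t, p = stats.ttest_1samp(window, popmean=mu)  # H0: mean == mu
        if p > alpha:                                 # H0 not rejected
            accepted.append(name)
    return accepted

print(isolate(window, signatures))            # expected: ['fault_2']
```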
Abstract:
This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a given sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
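For context, a minimal sketch of linear value-function estimation with features on a toy Markov chain; a plain TD(0) update stands in here for the paper's adaptive algorithm, and the chain, rewards and features are illustrative:

```python
# Linear value-function approximation on a 2-state Markov chain via TD(0).
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.9, 0.1], [0.2, 0.8]])        # transition matrix
r = np.array([0.0, 1.0])                      # reward per state
phi = np.eye(2)                               # feature vectors (one-hot here)
gamma, alpha = 0.9, 0.05

w, s = np.zeros(2), 0
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    td_err = r[s] + gamma * phi[s_next] @ w - phi[s] @ w
    w += alpha * td_err * phi[s]              # TD(0) with linear features
    s = s_next

print("estimated values ~", phi @ w)          # true values ~ [2.43, 5.14]
```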
Abstract:
The research in this thesis focused on the generation, pinning and depinning of magnetic domain walls (DWs) in permalloy nanowires with controlled defects. Access to the latest nanofabrication techniques has opened important research lines on DW motion, boosting applications such as magnetic memory devices. In 2004, Stuart Parkin at IBM conceived an innovative concept, the "racetrack memory", based on a ferromagnetic nanowire in which the magnetization domains constitute the bits of information. The boundary between those domains, i.e. the magnetic domain wall, would ideally be moved by spin transfer from a spin-polarized current. DWs pin at certain positions thanks to artificially created nanometre-scale pinning sites, or notches, fabricated by electron-beam lithography. The success of this idea relies on careful and predictable control of DW generation and a well-defined pinning-depinning process, both for reading and for writing the bits of information. Slonczewski showed in 1994 that a spin-polarized current can transfer magnetic moment to the local magnetization and thus move DWs by spin transfer rather than by the magnetic field created by the current. Since then, many research groups worldwide have worked on optimizing the conditions for current-induced DW motion via the spin-transfer effect. The fraction of spin-polarized electrons traveling through a ferromagnetic nanowire is considerably small, so the current density currently required to move a DW by spin-transfer torque exceeds 1×10⁷ A/cm². Such a high current density not only causes significant degradation of the device; important effects related to current-induced Joule heating have also been observed. A further scientific and technological issue is the diversity of DW states pinned at the notch. The types of DW pinned, their chirality, and their characteristic depinning current or field may change if the notch has slightly different dimensions, if the stripe has a different thickness, or even if the DW is pinned by a different procedure. Additionally, there is a stochastic component in both the injection of the DW and its pinning-depinning process, which may be partly intrinsic to the nature of a DW travelling along the wire at non-zero temperature and partly due to the unavoidable defects introduced during nanofabrication. This is a serious drawback, because depending on the DW type, different values of current and/or magnetic field must be applied to depin the DW from the notch. As mentioned above, accurate reading and writing of the bits of information requires a controlled, reproducible and predictable injection and pinning-depinning process, i.e. generating, pinning and depinning the DWs always under the same conditions of applied current or field. Therefore, in the first chapter of results of this thesis we studied the pinning and depinning of DWs in notches of six different shapes, each with two different depths. A statistical analysis was conducted on different wires to determine the pinning probability of each notch type and the dispersion in the applied magnetic field needed to depin the DW. We then studied the nucleation of DWs with nanosecond current pulses injected through a conductive stripe adjacent to the nanowire, identifying, as a function of the applied magnetic field, the DW types pinned at each of three different notch shapes. Furthermore, with this injection method, which proved fast and reliable, we managed to generate and pin a single DW type, minimizing the stochastic behavior mentioned above. Under these optimized conditions we studied DW depinning by spin-polarized current, achieving a controlled and reproducible depinning process, always at the same values of applied current and magnetic field. Additionally, by applying current pulses in opposite directions and comparing the results, we studied the thermal contribution due to Joule heating. The results obtained represent an important step towards the practical exploitation of these devices.
Abstract:
In this paper, a computer-based tool is developed to analyze student performance along a given curriculum. The proposed software uses historical data to compute passing/failing probabilities and simulates future student academic performance with stochastic simulation (Monte Carlo) methods, according to the specific university regulations. This makes it possible to compute the academic performance rates for the specific subjects of the curriculum for each semester, as well as the overall rates for the set of subjects in the semester, namely the efficiency rate and the success rate. Additionally, we compute the rates for the Bachelor's degree: the graduation rate, measured as the percentage of students who finish as scheduled or taking one extra year, and the efficiency rate, measured as the percentage of credits of the curriculum with respect to the credits actually taken. In Spain, these metrics have been defined by the National Quality Evaluation and Accreditation Agency (ANECA). Moreover, the sensitivity of the performance metrics to some of the parameters of the simulator is analyzed using statistical tools (design of experiments). The simulator has been adapted to the curriculum characteristics of the Bachelor in Engineering Technologies at the Technical University of Madrid (UPM).
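A minimal sketch of the Monte Carlo idea: simulate each student's semester-by-semester progress from historical passing probabilities and estimate the rates by averaging. The toy curriculum, probabilities and time limit below are hypothetical:

```python
# Monte Carlo estimate of a graduation rate from per-course pass probabilities.
import random

p_pass = {"Calculus": 0.65, "Physics": 0.7, "Programming": 0.8}
MAX_SEMESTERS, N = 4, 10000      # scheduled length plus one extra year

graduated = 0
for _ in range(N):
    pending, sem = set(p_pass), 0
    while pending and sem < MAX_SEMESTERS:
        # Each pending course is attempted; failed courses stay pending.
        pending = {c for c in pending if random.random() > p_pass[c]}
        sem += 1
    graduated += not pending
print("graduation rate ~", graduated / N)
```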
Quality-optimization algorithm based on stochastic dynamic programming for MPEG DASH video streaming
Abstract:
In contrast to traditional push-based protocols, adaptive streaming techniques like Dynamic Adaptive Streaming over HTTP (DASH) shift the focus to the client, which dynamically requests portions of the content at different qualities to cope with limited and variable bandwidth while aiming to maximize the quality perceived by the user. Since the DASH adaptation logic at the client is not covered by the standard, we propose a solution based on Stochastic Dynamic Programming (SDP) techniques to find the optimal request policies that guarantee the users' Quality of Experience (QoE). Our algorithm is evaluated in a simulated streaming session and compared with other adaptation approaches. The results show that our proposal outperforms them in terms of QoE, requesting higher qualities on average.
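A toy sketch of the SDP idea: value iteration over (buffer, bandwidth) states to choose a quality level trading off perceived quality against stall risk. The state space, segment model, reward and bandwidth Markov chain are all assumptions for illustration, not the paper's formulation:

```python
# Value iteration over (buffer level, bandwidth state) for quality selection.
import numpy as np

qualities = [1.0, 2.5, 4.0]                   # bitrate (Mbps) per quality level
buffers = range(0, 11)                        # buffer level in seconds
bw_states = [1.5, 3.0, 5.0]                   # bandwidth states (Mbps)
bw_p = np.array([[0.7, 0.3, 0.0],             # Markov bandwidth transitions
                 [0.2, 0.6, 0.2],
                 [0.0, 0.3, 0.7]])
gamma = 0.95

V = np.zeros((len(buffers), len(bw_states)))
for _ in range(200):
    V_new = np.zeros_like(V)
    for b in buffers:
        for w, bw in enumerate(bw_states):
            best = -np.inf
            for q, rate in enumerate(qualities):
                # A 2 s segment takes ~2*rate/bw s to download (buffer drains),
                # then adds 2 s of playback to the buffer.
                b_next = min(10, max(0, b + 2 - int(2 * rate / bw)))
                reward = q - (5.0 if b_next == 0 else 0.0)   # stall penalty
                best = max(best, reward + gamma * bw_p[w] @ V[b_next])
            V_new[b, w] = best
    V = V_new
print(V[5])                                   # values at a half-full buffer
```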
Abstract:
Operating theatres are the engine of a hospital: proper management of the operating rooms and their staff represents a great challenge for managers, and the results directly impact the hospital budget. This work presents a MILP model for the efficient scheduling of multiple surgeries in Operating Rooms (ORs) during a working day. The model considers multiple surgeons, multiple ORs and different types of surgeries. Stochastic strategies are also implemented to take into account the uncertainty in surgery durations (pre-incision, incision and post-incision times). In addition, heuristic-based methods and a MILP decomposition approach are proposed for solving large-scale OR scheduling problems in a computationally efficient way. All these computer-aided strategies have been implemented in AIMMS, an advanced modeling and optimization environment, resulting in a user-friendly solution tool for operating room management under uncertainty.
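A toy MILP sketch of the surgery-to-OR assignment at the core of such models (not the paper's full model, which also covers surgeons and duration uncertainty), written with the PuLP package under the assumption that it and a bundled solver are available; durations are hypothetical mean values:

```python
# Assign surgeries to ORs within a working day, minimizing the makespan.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

surgeries = {"s1": 2.0, "s2": 3.5, "s3": 1.5, "s4": 4.0}   # mean hours
rooms, day_hours = ["OR1", "OR2"], 8.0

prob = LpProblem("or_assignment", LpMinimize)
x = {(s, r): LpVariable(f"x_{s}_{r}", cat=LpBinary)
     for s in surgeries for r in rooms}
makespan = LpVariable("makespan", lowBound=0)

for s in surgeries:                            # each surgery in exactly one OR
    prob += lpSum(x[s, r] for r in rooms) == 1
for r in rooms:                                # OR load within day and makespan
    load = lpSum(surgeries[s] * x[s, r] for s in surgeries)
    prob += load <= day_hours
    prob += load <= makespan

prob += makespan                               # objective: balance OR loads
prob.solve()
print({(s, r): x[s, r].value() for s in surgeries for r in rooms})
```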
Abstract:
We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
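A sketch of the stochastic-approximation-plus-sampling idea on a toy incomplete-data problem: ML estimation of a normal mean from right-censored data. Here the imputation step is a direct truncated-normal draw standing in for a full MCMC step, and all numerical values are illustrative:

```python
# Stochastic approximation for incomplete data: impute censored values by
# sampling, then take a Robbins–Monro step towards the complete-data MLE.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)
c, sigma = 1.0, 1.0                            # censoring point, known std
data = rng.normal(0.5, sigma, 300)
obs, n_cens = data[data <= c], np.sum(data > c)

mu = 0.0                                       # initial estimate
for k in range(1, 2001):
    # Impute the censored observations given the current parameter value.
    a = (c - mu) / sigma                       # standardized lower bound
    imputed = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                            size=n_cens, random_state=rng)
    # Complete-data MLE of the mean, blended with a decreasing step size.
    mle = (obs.sum() + imputed.sum()) / (len(obs) + n_cens)
    mu += (mle - mu) / k                       # Robbins–Monro update
print("estimated mean ~", mu)
```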