Abstract:
Stochastic model updating must be considered for quantifying the uncertainties inherently present in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted to generate samples from the assumed or measured probability distributions of the responses. Each sample corresponds to an individual deterministic inverse process that predicts deterministic parameter values. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis of variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method achieves similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
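As a rough sketch of the decomposition described above (the two-parameter quadratic response surface, the measured response distribution and all numbers are hypothetical), each Monte Carlo response sample can be fed to one deterministic least-squares inverse problem, after which the parameter statistics follow from the ensemble of predictions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface standing in for the FE model:
# maps two structural parameters to two responses (e.g. natural frequencies).
def response_surface(theta):
    t1, t2 = theta
    return np.array([
        10.0 + 2.0 * t1 + 0.5 * t2 + 0.1 * t1 * t2,
        25.0 + 1.0 * t1 + 3.0 * t2 - 0.2 * t2**2,
    ])

# Monte Carlo samples from an assumed measured response distribution.
mean_resp = np.array([12.3, 28.1])
cov_resp = np.diag([0.05, 0.10])
samples = rng.multivariate_normal(mean_resp, cov_resp, size=500)

# One deterministic inverse optimization per sample.
estimates = np.array([
    least_squares(lambda th, y=y: response_surface(th) - y, x0=[1.0, 1.0]).x
    for y in samples
])

print("parameter means:    ", estimates.mean(axis=0))
print("parameter variances:", estimates.var(axis=0, ddof=1))
```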
Abstract:
The anisotropic magnetoresistive (AMR) effect is widely utilized in sensor applications related to the detection of weak magnetic fields. Exchange coupling between an antiferromagnet (AF) and a ferromagnet (FM) is known to be a significant parameter for the field sensitivity of the magnetoresistance, because the bias field in the AF pins the magnetic domains in the FM layer. In this work we have studied the thermal evolution of the magnetization reversal processes in nanocrystalline exchange-biased Ni80Fe20/Ni-O bilayers with large training effects, and we report the anisotropic magnetoresistance ratio arising from the field orientation in the bilayer.
Abstract:
Flash floods are of major relevance in natural disaster management in the Mediterranean region. In many cases, the damaging effects of flash floods can be mitigated by adequate management of flood control reservoirs. This requires the development of suitable models for the optimal operation of reservoirs. A probabilistic methodology is presented for calibrating the parameters of a reservoir flood control model (RFCM) that takes into account the stochastic variability of flood events. This study addresses the crucial problem of operating reservoirs during flood events, considering downstream river damages and dam failure risk as conflicting operation criteria. These two criteria are aggregated into a single objective of total expected damages from both the maximum released flows and the stored volumes (overall risk index). For each selected parameter set, the RFCM is run under a wide range of hydrologic loads (determined through Monte Carlo simulation). The optimal parameter set is obtained through the overall risk index (balanced solution) and then compared with other solutions on the Pareto front. The proposed methodology is implemented at three different reservoirs in the southeast of Spain. The results obtained show that the balanced solution offers a good compromise between the two main objectives of reservoir flood control management.
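The aggregation into an overall risk index can be sketched as follows; the single-parameter linear-reservoir stand-in for the RFCM, the damage curves and the gamma-distributed hydrographs are all invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the RFCM: a linear reservoir whose single
# operating parameter k sets the release fraction per time step;
# returns the peak outflow and peak storage for one flood event.
def run_rfcm(k, inflow):
    storage = peak_out = peak_sto = 0.0
    for q_in in inflow:
        outflow = k * storage
        storage += q_in - outflow
        peak_out = max(peak_out, outflow)
        peak_sto = max(peak_sto, storage)
    return peak_out, peak_sto

def damage_flow(q):      # downstream river damage from released flow
    return max(0.0, q - 80.0) ** 2

def damage_storage(s):   # dam-failure risk proxy from stored volume
    return max(0.0, s - 500.0) ** 2

# Monte Carlo hydrologic loads (invented gamma-distributed hydrographs).
floods = [rng.gamma(shape=2.0, scale=30.0, size=48) for _ in range(200)]

# Overall risk index = total expected damages over the Monte Carlo loads.
def risk_index(k):
    return float(np.mean([damage_flow(q) + damage_storage(s)
                          for q, s in (run_rfcm(k, f) for f in floods)]))

# Balanced solution: the parameter minimizing the overall risk index.
best_k = min(np.linspace(0.05, 0.5, 10), key=risk_index)
print(f"balanced parameter k = {best_k:.2f}, risk index = {risk_index(best_k):.1f}")
```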
Abstract:
Correct modeling of the equivalent circuits of solar cells and panels is today an essential tool for power optimization. However, the parameter extraction for those circuits is still a difficult task that normally requires both experimental data and calculation procedures generally not available to the ordinary user. This paper presents a new analytical method that easily calculates the equivalent circuit parameters from the data that manufacturers usually provide. The analytical approximation is based on a new methodology, since the methods developed until now to obtain the equivalent circuit parameters from manufacturer's data have always been numerical or heuristic. Results from the present method are as accurate as those of existing numerical methods, which are more complex in terms of calculation process and resources.
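For reference, the equivalent circuit usually targeted by such extraction methods is the standard five-parameter single-diode model (the paper's exact formulation may differ):

```latex
I = I_{ph} - I_0\left[\exp\!\left(\frac{V + I R_s}{n V_t}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}
```

Here I_ph is the photocurrent, I_0 the diode saturation current, n the ideality factor, V_t the thermal voltage, and R_s and R_sh the series and shunt resistances; the extraction task is to recover these five parameters from datasheet points such as (0, Isc), (Voc, 0) and the maximum power point (Vmp, Imp).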
Abstract:
Summary: Description: three-quarter-length portrait of a child, facing forward
Abstract:
Container terminals are complex systems where a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to minimizing the time that vessels stay in port. As a consequence, operations planning is becoming more complex, and the tolerances for error are smaller. In this context, operational disturbances must be kept to a minimum. The main sources of operational disruption, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, triggering failure or stoppage modes. The latter may result not only in operational delays but also in collateral and reputational damage or loss of time (especially management time), all of which has an impact on the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, whose levels of automation are increasing. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of decision-support tools, since these will then be adapted to the real climatic and operational conditions that exist on site. For the short term, a methodology is proposed to obtain forecasts of operational parameters in container terminals. A case study is presented in which the proposed model is applied to obtain forecasts of vessel performance; this research is based entirely on data provided by a semi-automated container terminal in Spain. For the long term, the study analyses how to manage, evaluate and mitigate the effect of operational disruptions by means of risk assessment, a useful approach for evaluating the effect that uncertain but likely events can have on the long-term throughput of the terminal. In addition, a definition of operational risk in port facilities is proposed, together with a discussion of the terms that best represent the nature of the activities involved, and finally guidelines are provided for managing the results obtained.
Abstract:
Wave energy conversion differs essentially from other renewable energies in that the dependence between the device design and the energy resource is stronger. Dimensioning is therefore considered a key stage when a Wave Energy Converter (WEC) design project is undertaken. Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve good performance. The paper proposes an automatic dimensioning methodology to be applied in the initial design project stages, and the following elements are described to carry out the study: a design optimization algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. After that, a parametric analysis is included considering different combinations of the key parameters previously introduced. A variety of case studies are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for the WEC design stage are given.
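WEC energy production is commonly evaluated by combining a device power matrix with the site's wave scatter diagram; the sketch below uses invented matrices, since the abstract does not give the paper's actual procedure or values:

```python
import numpy as np

# Illustrative sketch: annual energy production as the element-wise product
# of a WEC power matrix (kW per sea state) and a site scatter diagram
# (hours/year per sea state). Rows index Hs bins, columns index Te bins;
# both matrices are hypothetical placeholders.
power_matrix = np.array([[ 20,  45,  60],
                         [ 55, 120, 150],
                         [ 90, 210, 260]])        # kW
scatter_hours = np.array([[800, 600, 150],
                          [500, 700, 300],
                          [100, 250, 120]])       # hours/year

aep_kwh = float(np.sum(power_matrix * scatter_hours))
print(f"annual energy production = {aep_kwh:,.0f} kWh")
```

Running this for each candidate design parameter set, as in the parametric analysis described above, allows the study cases to be ranked against the reference case by annual energy production.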
Abstract:
The reason that the indefinite exponential increase in the number of one's ancestors does not take place is found in the law of sibling interference, which can be expressed by the following simple equation: Nn/ASZ × 2 = Nn+1, where Nn is the number of ancestors in the nth generation, ASZ is the average sibling size of these ancestors, and Nn+1 is the number of ancestors in the next older generation (n + 1). Accordingly, the exponential increase in the number of one's ancestors is an initial anomaly that occurs while ASZ remains at 1. Once ASZ begins to exceed 1, the rate of increase in the number of ancestors is progressively curtailed, falling further and further behind the exponential rate. Eventually, ASZ reaches 2, and at that point the number of ancestors stops increasing for two generations. These two generations, named AN SA and AN SA + 1, are the most critical in the ancestry, for one's ancestors at that point come to represent all the progeny-produced adults of the entire ancestral population. Thereafter, the fate of one's ancestors becomes the fate of the entire population. If the population to which one belongs is a successful, slowly expanding one, the number of ancestors slowly declines as one moves toward the remote past, because ASZ would exceed 2. Only when ASZ is less than 2 would the number of ancestors increase beyond the AN SA and AN SA + 1 generations. Since the latter is an indication of a failing population on the way to extinction, such a population must have had a previous AN SA involving a far greater number of individuals. Simulations indicated that for a member of a continuously successful population, the AN SA ancestors might have numbered as many as 5.2 million, the AN SA generation being the 28th generation in the past. However, because of the law of increasingly irrelevant remote ancestors, only a very small fraction of the AN SA ancestors would have left genetic traces in the genome of each descendant of today.
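A minimal sketch of the recursion Nn+1 = 2·Nn/ASZ is given below; the schedule by which ASZ rises from 1 past 2 is assumed purely for illustration and is not the paper's simulation:

```python
# Ancestor-count recursion N_{n+1} = 2 * N_n / ASZ from the equation above.
# While ASZ = 1 the count doubles each generation; at ASZ = 2 it is constant;
# above 2 it declines. The growth schedule for ASZ is hypothetical.
n_ancestors = 2.0   # generation 1: two parents
asz = 1.0
for gen in range(2, 41):
    asz = min(2.2, asz + 0.06)          # assumed slow growth of ASZ
    n_ancestors = 2.0 * n_ancestors / asz
    if gen % 5 == 0:
        print(f"generation {gen:2d}: ASZ = {asz:.2f}, "
              f"ancestors = {n_ancestors:,.0f}")
```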
Abstract:
The susceptibility of clay-bearing rocks to weathering (erosion and/or differential degradation) is known to influence the stability of heterogeneous slopes. However, not all of these rocks show the same behaviour, as there are considerable differences in the speed and type of weathering observed. It is therefore very important to establish relationships between behaviour quantified in the laboratory and that observed in the field. The slake durability test is the laboratory test most commonly used to evaluate the relationship between slaking behaviour and rock durability. However, it has a number of disadvantages: it does not account for changes in the shape and size of fragments retained on the 2 mm sieve, nor does its most commonly used index (Id2) accurately reflect the weathering behaviour observed in the field. The main aim of this paper is to propose a simple methodology, for use by practitioners, to characterize the weathering behaviour of carbonate lithologies that outcrop in heterogeneous rock masses (such as flysch slopes). To this end, the Potential Degradation Index (PDI) is proposed. It is calculated from the fragment size distribution curves of the material retained in the drum after each cycle of the slake durability test, with the number of slaking cycles increased to five. Through laboratory testing of 117 samples of carbonate rocks extracted from strata in selected slopes, six different rock types were established based on their slaking behaviour, corresponding to the different weathering behaviours observed in the field.
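The abstract does not give the PDI formula, so the sketch below only illustrates one plausible ingredient of such an index: a normalized area under the fragment-size grading curve after each of the five slaking cycles, computed from invented data (this is not the paper's PDI):

```python
import numpy as np

# Illustrative only: hypothetical percent-passing grading curves for the
# material retained in the drum after each of five slaking cycles.
sizes_mm = np.array([0.0, 2.0, 8.0, 16.0, 32.0, 64.0])   # fragment size axis
passing = np.array([
    [ 2,  5, 10, 20, 45, 100],   # cycle 1
    [ 4,  9, 16, 30, 60, 100],   # cycle 2
    [ 7, 14, 25, 42, 72, 100],   # cycle 3
    [10, 20, 34, 55, 82, 100],   # cycle 4
    [14, 27, 44, 66, 90, 100],   # cycle 5
])

for cycle, curve in enumerate(passing, start=1):
    # Normalized area under the grading curve: higher -> finer, more degraded.
    area = np.trapz(curve, sizes_mm) / (100.0 * sizes_mm[-1])
    print(f"cycle {cycle}: normalized grading-curve area = {area:.3f}")
```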