27 results for DISTRIBUTION MODELS
Abstract:
A simplified CFD wake model based on the actuator disk concept is used to simulate the wind turbine, represented by a disk upon which a distribution of forces, defined as axial momentum sources, is applied to the incoming non-uniform flow. The rotor is assumed to be uniformly loaded, with the exerted forces being a function of the incident wind speed, the thrust coefficient and the rotor diameter. The model is tested under different parameterizations of turbulence models and validated against experimental measurements downwind of a wind turbine in terms of wind speed deficit and turbulence intensity.
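For illustration, a minimal sketch (not the paper's implementation) of how a uniformly loaded actuator disk turns the abstract's three inputs (incident wind speed, thrust coefficient, rotor diameter) into an axial momentum source; all names, the default air density, and the disk thickness dx are our assumptions:

import numpy as np

def axial_momentum_source(u_inf, c_t, d, rho=1.225, dx=1.0):
    """Uniform axial force per unit volume applied over the disk cells."""
    area = np.pi * (d / 2.0) ** 2                 # rotor swept area, m^2
    thrust = 0.5 * rho * area * c_t * u_inf ** 2  # total axial thrust, N
    return thrust / (area * dx)                   # source term, N/m^3

# example: an 80 m rotor facing 8 m/s with C_T = 0.8, disk region 2 m thick
print(axial_momentum_source(u_inf=8.0, c_t=0.8, d=80.0, dx=2.0))

In a CFD solver this value would be subtracted from the axial momentum equation in the cells occupied by the disk.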
Abstract:
This paper focuses on the general problem of coordinating multi-robot systems; more specifically, it addresses the self-election of heterogeneous and specialized tasks by autonomous robots. In this regard, we propose experimenting with two different techniques, chiefly based on biologically inspired self-organization and emergence: response threshold models and ant colony optimization. Under this approach one can speak of multi-task selection instead of multi-task allocation, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally depending on the load, i.e. the number of pending tasks to be performed. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robots' error in estimating the real number of pending tasks, and also by generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
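As a concrete illustration of the response-threshold family the abstract refers to, the sketch below uses the standard s^n / (s^n + theta^n) engagement rule with an adaptive threshold. The parameter values (n, xi, phi, clamping bounds) are illustrative choices, not the paper's:

import random

def engage_probability(stimulus, threshold, n=2):
    """P(engage) = s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

def update_threshold(threshold, engaged, xi=0.1, phi=0.05, lo=0.01, hi=10.0):
    """Doing a task lowers its threshold (specialization); idling raises it."""
    theta = threshold - xi if engaged else threshold + phi
    return min(max(theta, lo), hi)

# one decision step for a robot whose stimulus is its local estimate
# of the number of pending loads
theta = 1.0
stimulus = 3.0
engaged = random.random() < engage_probability(stimulus, theta)
theta = update_threshold(theta, engaged)
print(engaged, theta)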
Abstract:
The demand for video content has rapidly increased in the past years as a result of the wide deployment of IPTV and the variety of services offered by network operators. One of the services that has become especially attractive to customers is real-time Video on Demand (VoD), because it offers immediate streaming of a large variety of video contents.
The price that operators have to pay for this convenience is increased traffic in the networks, which are becoming more congested due to the higher demand for VoD content and the increased quality of the videos. Therefore, one of the main objectives of this thesis is to find solutions that reduce the traffic in the core of the network while keeping the quality of service at a satisfactory level and reducing the traffic cost. The thesis proposes a hierarchical system of streaming servers that runs an algorithm for optimal placement of the contents according to the users' behavior and the state of the network. Since any algorithm for optimal content distribution reaches a limit beyond which no further improvements can be made, including the service customers themselves (the peers) in the streaming process can further reduce the network traffic. This is achieved by taking advantage of the control that the operator has, in privately managed networks, over the Set-Top Boxes placed at the clients' premises. The operator reserves certain storage and streaming capacity on the peers to store video contents and stream them to other clients in order to relieve the streaming servers. Because the peers cannot completely substitute the streaming servers, the thesis proposes a system for peer-assisted streaming. Among the important questions addressed in the thesis is how the system parameters and the various distributions of the video contents on the peers impact the overall system performance. To answer these questions, the thesis proposes a precise and flexible stochastic model that takes into consideration parameters such as the uplink and storage capacity of the peers, the number of peers, the size of the video content library, the size of the contents and the content distribution scheme in order to estimate the benefits of peer-assisted streaming. The work also proposes an extended version of the mathematical model that includes the failure probability of the peers and their recovery time in the set of parameters. These models are used as tools for conducting thorough analyses of the peer-assisted VoD streaming system over the wide range of parameters defined in the models.
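A rough, hypothetical sketch of the kind of estimate such a stochastic model enables: the fraction of concurrent requests the peers can absorb, given a Zipf popularity assumption, the number of contents cached on peers, the peer count and per-peer uplink capacity. This is a deliberately simplified stand-in, not the thesis model, and every parameter name is ours:

import numpy as np

def peer_offload(library_size, cached, n_peers, uplink_streams,
                 concurrent_requests, zipf_s=0.8):
    ranks = np.arange(1, library_size + 1)
    popularity = ranks ** (-zipf_s)              # Zipf-like popularity
    popularity /= popularity.sum()
    hit = popularity[:cached].sum()              # request falls in peer caches
    capacity = n_peers * uplink_streams          # simultaneous peer streams
    served_by_peers = min(hit * concurrent_requests, capacity)
    return served_by_peers / concurrent_requests # fraction offloaded

print(peer_offload(library_size=2000, cached=100, n_peers=5000,
                   uplink_streams=1, concurrent_requests=4000))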
Abstract:
The last few years have seen a growing effort to reduce CO2 emissions and their negative environmental impact. In the transport industry specifically, vehicle weight reduction appears as the most straightforward option to achieve this objective, and Mg alloys constitute a significant weight-saving material alternative. Many efforts have been devoted over the last decade to understanding the main mechanisms governing the plasticity of these materials and, despite being already widely used, high-pressure die-cast and wrought Mg alloys are still the subject of intense research campaigns. It is now timely to develop models that can capture the complexity inherent to the deformation of Mg alloys. This PhD thesis constitutes an attempt to better understand the relationship between the microstructure and the mechanical behavior of Mg alloys, resulting in polycrystalline models that successfully predict macro- and microscopic properties. Plastic deformation of Mg alloys is driven by a combination of deformation mechanisms specific to their hexagonal crystal structure, namely basal, prismatic and pyramidal dislocation slip, as well as twinning. Wrought Mg alloys present strong textures, so specific deformation mechanisms are preferentially activated depending on the orientation of the applied load. In this work a crystal plasticity finite element model has been developed in order to understand the macro- and micromechanical behavior of a rolled Mg AZ31 alloy (Mg-3wt.%Al-1wt.%Zn). The model includes twinning and accounts for slip-slip, slip-twin and twin-twin hardening interactions. Upon calibration and validation against experiments, the model successfully predicts the activity of the various deformation mechanisms and the evolution of the texture at different deformation stages. Furthermore, a combined three-dimensional electron backscatter diffraction and modeling approach has been adopted to investigate the effect of grain boundaries on twin propagation in the same material. Both experiments and simulations confirm that the misorientation angle has a critical influence on twin propagation. Non-Schmid effects, i.e. plastic deformation events that do not comply with the Schmid law with respect to the applied stress, are absent in the vicinity of low-misorientation boundaries and become more abundant as the misorientation angle increases. This research also proves that twin morphology is highly influenced by the Schmid factor. Finally, casting processes usually lead to the formation of significant amounts of gas and shrinkage microporosity, which adversely affect the mechanical properties.
The application of hydrostatic pressure after casting can reduce the porosity and improve the properties, but little is known about its effects on the size and morphology of the casting's pores. In this work, an experimental-computational approach based on X-ray computed tomography, image analysis and finite element analysis is utilized to determine the 3D porosity distribution and its evolution with hydrostatic pressure in a high-pressure die-cast Mg AZ91 alloy (Mg-9wt.%Al-1wt.%Zn). The real 3D pore distribution obtained by tomography is used as input for the finite element simulations using an isotropic hardening law. The model is calibrated and validated against experimental stress-strain curves. The results reveal that the pressure treatment has a significant influence on both the volume and shape changes of individual pores, which have been precisely quantified and which are found to be related to the initial pore volume. In conclusion, the crystal plasticity model proposed in this work successfully describes the intrinsic deformation mechanisms of Mg alloys at both the mesoscale and the microscale. More specifically, it can capture slip and twin activities, their interactions, as well as the potential porosity effects arising from casting processes.
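Since variant selection in the twinning study is ranked by the Schmid factor, a small sketch of how it is computed for a generic slip or twin system may help; the vectors below are illustrative, not the actual AZ31 systems analyzed in the thesis:

import numpy as np

def schmid_factor(load_dir, plane_normal, slip_dir):
    """m = |cos(phi)| * |cos(lambda)| for a uniaxial load."""
    l = load_dir / np.linalg.norm(load_dir)
    n = plane_normal / np.linalg.norm(plane_normal)
    s = slip_dir / np.linalg.norm(slip_dir)
    return abs(np.dot(l, n)) * abs(np.dot(l, s))

# a load tilted 45 degrees from a basal-like plane normal gives m = 0.5,
# the maximum possible value
load = np.array([1.0, 0.0, 1.0])
print(schmid_factor(load, np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])))

The resolved shear stress on the system is then tau = m * sigma, and non-Schmid events are precisely those where deformation activates on systems with low m.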
Abstract:
Game Theory principles allow the development of stochastic multi-robot patrolling models to protect critical infrastructures. Critical infrastructure protection is a great concern for countries around the world, mainly due to the terrorist attacks of the last decade. In this document, the term infrastructure includes airports, nuclear power plants, and many other facilities. The patrolling problem is defined as the activity of traversing a given environment to monitor any activity or to sense some environmental variables. If this activity were performed by a fleet of robots, they would have to visit a set of places of interest in the environment at irregular time intervals for security purposes. This problem is solved using multi-robot patrolling models. To date, works in the literature have solved this problem by applying various mathematical principles. The multi-robot patrolling models developed in those works represent great advances in this field.
However, the models that obtain the best results are unfeasible for security applications due to their centralized and predictable nature. This thesis presents five distributed and unpredictable multi-robot patrolling models based on mathematical learning models derived from Game Theory. These multi-robot patrolling models aim at overcoming the disadvantages of previous work. To this end, the multi-robot patrolling problem was formulated using concepts of Graph Theory to represent the environment, and several normal-form games were defined at each vertex of the graph in this formulation. The multi-robot patrolling models developed in this research work have been validated and compared with the best-ranked multi-robot patrolling models in the literature. Both validation and comparison were performed using a patrolling simulator and real robots. Experimental results show that the multi-robot patrolling models developed in this research work outperform previous ones in 80% of 150 case studies. Moreover, these models offer several features that are valuable in security applications, such as distribution, robustness, scalability, and dynamism. The achievements of this research work validate the potential of Game Theory for developing patrolling models to protect infrastructures.
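To make the flavor of a distributed, unpredictable patrolling rule concrete, here is a hypothetical sketch in which each robot stochastically favors neighboring vertices with high idleness (time since last visit). The softmax choice rule and all parameters are our illustration, not one of the five thesis models:

import random
import math

def next_vertex(neighbors, idleness, tau=5.0):
    """Pick a neighbor with probability increasing in its idleness."""
    weights = [math.exp(idleness[v] / tau) for v in neighbors]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for v, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return v
    return neighbors[-1]

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
idleness = {v: 0 for v in graph}
pos = 0
for _ in range(20):
    for v in idleness:
        idleness[v] += 1          # time passes at every vertex
    pos = next_vertex(graph[pos], idleness)
    idleness[pos] = 0             # the visited vertex is refreshed
print(pos, idleness)

Because the choice is stochastic, an adversary observing past trajectories cannot predict the next visit, which is the property the thesis argues centralized deterministic models lack.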
Abstract:
Services in smart environments aim to increase the quality of people's lives. One of the most important issues when developing this kind of environment is testing and validating such services. These tasks usually imply high costs and annoying or unfeasible real-world testing. In such cases, artificial societies may be used to simulate the smart environment (i.e. physical environment, equipment and humans). With this aim, the CHROMUBE methodology guides test engineers when modeling human beings. Such models reproduce behaviors that are highly similar to the real ones. Originally, these models are based on automata whose transitions are governed by random variables, and the automaton's structure and the probability distribution functions of each random variable are determined by a manual trial-and-error process. In this paper, we present an alternative extension of this methodology that avoids this manual process: it learns human behavior patterns automatically from sensor data using machine learning techniques. The presented approach has been tested on a real scenario, where this extension has produced highly accurate human behavior models.
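A minimal sketch of the automatic alternative described above: estimating an automaton's transition probabilities directly from a sequence of sensor-derived activity labels by maximum likelihood. The labels and data are invented for illustration and are not from the paper:

from collections import Counter, defaultdict

observations = ["sleep", "sleep", "kitchen", "kitchen", "livingroom",
                "kitchen", "livingroom", "sleep", "sleep"]

counts = defaultdict(Counter)
for a, b in zip(observations, observations[1:]):
    counts[a][b] += 1          # count observed state transitions

transitions = {
    state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for state, nexts in counts.items()
}
print(transitions)             # maximum-likelihood transition probabilities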
Abstract:
One of the most promising areas in which probabilistic graphical models have shown an incipient activity is the field of heuristic optimization and, in particular, Estimation of Distribution Algorithms. Due to their inherent parallelism, different research lines have been studied trying to improve Estimation of Distribution Algorithms from the point of view of execution time and/or accuracy. Among these proposals, we focus on the so-called distributed or island-based models. This approach defines several islands (algorithm instances) running independently and exchanging information with a given frequency. The information sent by the islands can be either a set of individuals or a probabilistic model. This paper presents a comparative study of a distributed univariate Estimation of Distribution Algorithm and a multivariate version, paying special attention to the comparison of two alternative methods for exchanging information, over a wide set of parameters and problems: the standard benchmark developed for the IEEE Workshop on Evolutionary Algorithms and other Metaheuristics for Continuous Optimization Problems of the ISDA 2009 Conference. Several analyses from different points of view have been conducted to examine both the influence of the parameters and the relationships between them, including a characterization of the configurations according to their behavior on the proposed benchmark.
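As a toy illustration of the island-based scheme, the sketch below runs two univariate EDA (UMDA) islands on a OneMax problem and periodically exchanges their probabilistic models (the bit-frequency vectors). Population sizes, the migration period and the blending rule are our assumptions, not the configurations studied in the paper:

import numpy as np

rng = np.random.default_rng(0)
n_bits, pop = 20, 50

def umda_step(p):
    """Sample a population, select the best half, re-estimate frequencies."""
    samples = rng.random((pop, n_bits)) < p
    fitness = samples.sum(axis=1)              # OneMax fitness
    elite = samples[np.argsort(fitness)[-pop // 2:]]
    return elite.mean(axis=0)

islands = [np.full(n_bits, 0.5), np.full(n_bits, 0.5)]
for gen in range(30):
    islands = [umda_step(p) for p in islands]
    if gen % 10 == 9:                          # migration: exchange models
        blended = 0.5 * (islands[0] + islands[1])
        islands = [blended.copy(), blended.copy()]
print([p.round(2) for p in islands])

Exchanging a set of elite individuals instead of the model itself is the other alternative the paper compares.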
Abstract:
Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture-of-Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
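A highly simplified sketch of the depth-based background subtraction idea, using a single Gaussian per pixel rather than the paper's full mixture and omitting the Bayesian network entirely; the synthetic depth data and the 3-sigma threshold are our assumptions:

import numpy as np

rng = np.random.default_rng(1)
frames = 2000.0 + rng.normal(0.0, 10.0, (50, 48, 64))  # background depth, mm

mean = frames.mean(axis=0)        # per-pixel background model
std = frames.std(axis=0) + 1e-6   # (one Gaussian per pixel, a simplification)

new_frame = frames[-1].copy()
new_frame[10:20, 10:20] = 1200.0  # an object appears in front of the wall

foreground = np.abs(new_frame - mean) / std > 3.0   # 3-sigma foreground test
print(foreground.sum(), "foreground pixels")

The paper's contribution is precisely what this sketch lacks: dynamic models that propagate the foreground depth distribution between frames instead of assuming it constant.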
Abstract:
Video quality assessment is still necessary to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video and formats beyond high definition, impose new criteria that must be analyzed to obtain the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and varied types of content. First, the content generation process should be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, both animated and live-action. On the other hand, quality assessment related to the video compression stage employs metrics that are not always adapted to the user's perception. The use of psychovisual models and visual attention diagrams would allow image areas to be weighted so that greater importance is given to the pixels the user is most likely to focus on. These two fields are related through the definition of the term saliency. Saliency is the capacity of the human visual system to characterize a viewed image, weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on additional features, such as motion, level of detail, pixel position or the presence of faces, which are the basic factors composing the visual attention model developed here. To detect the characteristics of a stereoscopic video sequence most likely to generate visual discomfort, the extensive literature on the subject was reviewed and preliminary subjective tests with users were performed. These led to the conclusion that discomfort occurred when there was an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". New subjective tests focused on analyzing these effects with different depth distributions were then carried out to pinpoint the parameters defining such images. The results show that abrupt changes occur in settings with motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the focusing time of the crystalline lens. To improve quality metrics through models adapted to the human visual system, further subjective tests were performed to determine the importance of each factor in masking a given degradation. The results show a slight improvement when applying weighting and visual attention masks, which bring the objective quality parameters closer to the response of the human eye.
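One simple way to realize the weighting described above is a saliency-weighted PSNR, in which the squared error of each pixel is scaled by a visual attention map before averaging. The sketch below is a generic illustration with synthetic data, not the thesis metric:

import numpy as np

def weighted_psnr(reference, distorted, saliency, peak=255.0):
    w = saliency / saliency.sum()            # normalize the attention map
    wmse = (w * (reference - distorted) ** 2).sum()
    return 10.0 * np.log10(peak ** 2 / wmse)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64)).astype(float)
dist = ref + rng.normal(0.0, 4.0, ref.shape)
sal = np.ones_like(ref)
sal[16:48, 16:48] = 5.0                      # e.g. a detected face region
print(weighted_psnr(ref, dist, sal))

Degradations falling inside the high-saliency region lower the score more than the same degradations in the periphery, which is the behavior the subjective tests aim to capture.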
Abstract:
An important aspect of Process Simulators for photovoltaics is prediction of defect evolution during device fabrication. Over the last twenty years, these tools have accelerated process optimization, and several Process Simulators for iron, a ubiquitous and deleterious impurity in silicon, have been developed. The diversity of these tools can make it difficult to build intuition about the physics governing iron behavior during processing. Thus, in one unified software environment and using self-consistent terminology, we combine and describe three of these Simulators. We vary structural defect distribution and iron precipitation equations to create eight distinct Models, which we then use to simulate different stages of processing. We find that the structural defect distribution influences the final interstitial iron concentration ([Fe_i]) more strongly than the iron precipitation equations. We identify two regimes of iron behavior: (1) diffusivity-limited, in which iron evolution is kinetically limited and bulk [Fe_i] predictions can vary by an order of magnitude or more, and (2) solubility-limited, in which iron evolution is near thermodynamic equilibrium and the Models yield similar results. This rigorous analysis provides new intuition that can inform Process Simulation, material, and process development, and it enables scientists and engineers to choose an appropriate level of Model complexity based on wafer type and quality, processing conditions, and available computation time.
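To see how the two regimes arise, consider generic Arrhenius expressions for iron diffusivity and solid solubility: when the diffusion length over the anneal is short, kinetics dominate (diffusivity-limited); when iron can equilibrate, the solubility curve controls the outcome. The prefactors and activation energies below are placeholders for illustration, not the calibrated values of the compared Simulators:

import numpy as np

K_B = 8.617e-5                      # Boltzmann constant, eV/K

def arrhenius(prefactor, e_act, temp_k):
    return prefactor * np.exp(-e_act / (K_B * temp_k))

temp = 1100.0                                   # anneal temperature, K
diffusivity = arrhenius(1.0e-3, 0.67, temp)     # cm^2/s  (placeholder)
solubility = arrhenius(1.0e22, 2.94, temp)      # cm^-3   (placeholder)

diffusion_length = np.sqrt(diffusivity * 600.0)  # 10-minute anneal, cm
print(f"L = {diffusion_length:.3f} cm, solubility = {solubility:.2e} cm^-3")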
Abstract:
The optimal design of a vertical cantilever beam is presented in this paper. The beam is assumed immersed in an elastic Winkler soil and subjected to several loads: a point force at the tip section, its self-weight, and a uniformly distributed load along its length. The optimal design problem is to find the beam of a given length and minimum volume such that the resultant compressive stresses are admissible. This problem is analyzed according to linear elasticity theory and within different alternative structural models: column, Navier-Bernoulli beam-column, and Timoshenko beam-column (i.e. with shear strain), under conservative loads, typically constant-direction loads. The results obtained in each case are compared in order to evaluate the sensitivity of the numerical results to the chosen model. The beam optimal design is described by the section distribution layout (area, second moment of area, shear area, etc.) along the beam span and the corresponding total beam volume. Other situations, some of them very interesting from a theoretical point of view, with follower loads (the Beck and Leipholz problems) are also discussed, leaving numerical details and results for future work.
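For reference, the governing equation of the Navier-Bernoulli beam-column variant on a Winkler foundation takes the standard form below; the notation is ours, not the paper's:

\[
  \frac{d^{2}}{dx^{2}}\!\left( EI(x)\,\frac{d^{2}w}{dx^{2}} \right)
  + \frac{d}{dx}\!\left( N(x)\,\frac{dw}{dx} \right)
  + k\,w(x) = q(x),
\]

where w(x) is the lateral deflection, N(x) the compressive axial force (tip point force plus accumulated self-weight), k the Winkler soil modulus, and q(x) the distributed lateral load. The design unknowns are the sectional properties (area, second moment of area) entering EI(x), chosen to minimize total volume under the admissible-stress constraint.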
Abstract:
Pressure management (PM) is commonly used in water distribution systems (WDSs). In the last decade, a strategic objective in the field has been the development of new scientific and technical methods for its implementation. However, due to a lack of systematic analysis of the results obtained in practical cases, progress has not always been reflected in practical actions. To address this problem, this paper provides a comprehensive analysis of the most innovative issues related to PM. The proposed methodology is based on a case-study comparison of qualitative concepts that draws on published work from 140 sources. The results include a qualitative analysis covering four aspects: (1) the objectives pursued by PM; (2) the types of regulation, including advanced control systems based on electronic controllers; (3) new methods for designing districts; and (4) the development of optimization models associated with PM. The evolution of these four aspects is examined and discussed. Conclusions regarding the current status of each factor are drawn and proposals for future research are outlined.