43 results for damage evolution process
Abstract:
Degradation studies previously carried out by the research team on a number of different mortars have shown that the observed degradation depends neither exclusively on the equilibrium pH of the solution nor on the relative solubility of the aggressive anions. In our tests no reason was found that would explain why anions of the same solubility but lower pH are less aggressive than others. The aim of this paper is to study the behavior of cement pastes in aggressive environments. As observed in previous research, this behavior is not easily explained by the usual parameters (pH, solubility, etc.) alone. The paper therefore examines whether the physicochemical characteristics of the solution are more important in certain environments than specific pH values, and seeks a degradation model which, starting from the physicochemical parameters of the solution, allows the different behaviors shown by cements of different composition to be interpreted. To that end, the degradation rates of the solid phases were computed for each environment considered. Three cements have been studied: CEM I 42.5R/SR, CEM II/A-V 42.5R and CEM IV/B-(P-V) 32.5 N. The pastes were exposed to five environments: 0.35 M sodium acetate/acetic acid, 0.17 M sodium sulfate solution, a solution representing natural water, saturated calcium hydroxide solution, and the laboratory environment. The attack mechanism was intended to be unidirectional; to achieve this, all sides of the cylinders were sealed except the attacked surface. The cylinders were taken out of the exposure environments after 2, 4, 7, 14, 30, 58 and 90 days. Both the variations in the aggressive solutions and those in the solid phases at different depths have been characterized. At each age and depth the calcium, magnesium and iron contents were analyzed, the evolution of the hydrated phases was studied by thermal analysis, and changes in the crystalline compounds were followed by X-ray diffraction. The sodium sulfate and natural water solutions stabilize an outer pH near 8 in a short time; however, the stability of the most pH-dependent phases is not the same. Although the pH is similar and a gypsum layer can form near the calcium-leaching surface, this stability is greater than in other sulfate solutions. Variations in the stability of the solids formed by inverse diffusion determine the rate of degradation.
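The abstract does not give the functional form of the degradation model; a common first approximation for leaching fronts of this kind, stated here purely as an assumption and not as the paper's model, is diffusion-controlled growth of the degraded depth:

```latex
% Hedged sketch: diffusion-controlled degradation front (assumption only).
% x(t) is the degraded depth; k is a rate constant fitted per environment
% and per cement composition.
\[
  x(t) = k\,\sqrt{t}, \qquad
  k = k(\text{environment},\ \text{cement composition})
\]
```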
Abstract:
The mechanical stability of EWT solar cells deteriorates when holes are created in the wafer. Nevertheless, chemical etching after the hole-generation process improves the mechanical strength by removing part of the damage produced in the drilling process. Several sets of wafers etched in alkaline baths of different durations have been prepared. The mechanical strength has been measured by the ring-on-ring bending test, and the failure stresses have been obtained through an FE simulation of the test. This paper compares these groups of wafers in order to obtain an optimum value of the thickness reduction produced by the chemical etching.
Abstract:
Turbulent mixing is a very important issue in the study of geophysical phenomena because most fluxes arising in geophysical fluids are turbulent. We study turbulent mixing due to convection using a laboratory experimental model with two miscible fluids of different density and an initially top-heavy density distribution. Because the fluids forming the initial unstable stratification are miscible, the turbulence produces molecular mixing. The denser fluid penetrates the lighter fluid layer and generates several gravitationally unstable forced plumes; as these turbulent plumes develop, the denser fluid comes into contact with the lighter fluid and the mixing process grows. Their development is driven by the lateral interaction between the plumes at the complex fractal surface between the dense and light fluids.
Abstract:
Software Product Line Engineering (SPLE) has proved to have significant advantages in family-based software development, but it also implies the upfront design of a product-line architecture (PLA) from which individual product applications can be engineered. The big upfront design associated with PLAs is in conflict with the current need of "being open to change". The turbulence of the current business climate makes change inevitable in order to stay competitive, and requires PLAs to be open to change even late in the development. The trend of "being open to change" is manifested in the Agile Software Development (ASD) paradigm, and it is spreading to the domain of SPLE. To reduce the big upfront design of PLAs as currently practiced in SPLE, new paradigms are being created, one being Agile Product Line Engineering (APLE). APLE aims to make the development of product lines more flexible and adaptable to change, as promoted in ASD. To put APLE into practice it is necessary to provide mechanisms that assist and guide the agile construction and evolution of PLAs while complying with the "be open to change" agile principle. This thesis defines a process for the agile construction and evolution of product-line architectures, which we refer to as Agile Product-Line Architecting (APLA). The APLA process provides agile architects with a set of models for describing, documenting and tracing PLAs, as well as an algorithm to analyze change impact. Both the models and the change impact analysis offer the following capabilities: flexibility and adaptability at the time of defining software architectures, enabling change during the incremental and iterative design of PLAs (anticipated or planned changes) and their evolution (unanticipated or unforeseen changes); assistance in checking architectural integrity through change impact analysis in terms of architectural concerns, such as dependencies on earlier design decisions, rationale, constraints, and risks; and guidance in the change decision-making process through change impact analysis in terms of architectural components and connections. APLA thus provides the mechanisms required to construct and evolve PLAs that can easily be refined iteration after iteration during the APLE development process. These mechanisms are provided in a modeling framework called FPLA (Flexible Product-Line Architecture). The contributions of this thesis have been validated through a project on a metering management system in electrical power networks. This case study took place in an i-smart software factory in collaboration with the Technical University of Madrid and Indra Software Labs.
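The abstract does not detail the change impact algorithm; purely as a hedged illustration (the function and component names below are hypothetical, not FPLA's API), impact analysis over components and connections can be sketched as reachability in a dependency graph:

```python
from collections import deque

def change_impact(dependencies, changed):
    """Toy change-impact analysis: return every component reachable from
    the changed one by following 'depends-on' edges in reverse.
    dependencies maps a component to the components it depends on."""
    # Invert the graph: who depends on whom.
    dependents = {}
    for comp, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(comp)
    impacted, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for comp in dependents.get(current, ()):
            if comp not in impacted:
                impacted.add(comp)
                queue.append(comp)
    return impacted

# Example: changing 'billing-core' impacts the components built on it.
deps = {"meter-reader": {"billing-core"},
        "reporting":    {"meter-reader"},
        "billing-core": set()}
print(change_impact(deps, "billing-core"))  # {'meter-reader', 'reporting'}
```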
Abstract:
The damage induced in quartz (c-SiO2) by heavy ions (F, O, Br) at MeV energies, where electronic stopping is dominant, has been investigated by RBS/C and optical methods. The two techniques indicate the formation of amorphous layers with an isotropic refractive index (n = 1.475) at fluences around 10¹⁴ cm⁻² that are associated with electronic mechanisms. The kinetics of the process can be described as the superposition of linear (possibly initial Poisson curve) and sigmoidal (Avrami-type) contributions. The coexistence of the two kinetic regimes may be associated with the differential roles of the amorphous track cores and preamorphous halos. By using ions and energies whose maximum stopping power lies inside the crystal (O at 13 MeV, F at 15 MeV and F at 30 MeV), buried amorphous layers are formed and optical waveguides have been generated at the sample surface.
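A hedged sketch of such two-term kinetics (the exact functional forms and parameter values are not given in the abstract; the Poisson and Avrami expressions below are the standard ones, assumed here):

```latex
% Amorphous fraction vs. fluence phi: linear/Poisson term plus an
% Avrami-type sigmoidal term; sigma, k, n and the weights A, B are
% fitting parameters (assumed form, A + B = 1).
\[
  f_a(\phi) \;=\; A\left(1 - e^{-\sigma\phi}\right)
           \;+\; B\left(1 - e^{-(k\phi)^{n}}\right),
  \qquad A + B = 1
\]
```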
Abstract:
This work studies the use of ultrasonic imaging as an evaluation tool for concrete subjected to freeze–thaw (F–T) cycles. To evaluate the damage in this deterioration process, ultrasonic velocity and attenuation images have been generated from concrete specimens with and without air-entraining agents. Two parameters have been proposed from these ultrasonic images according to our experimental setup: the non-assessable area proportion (NAAP) and a weighted average velocity in terms of the NAAP. The proposed parameters have been compared with the recommended failure criteria of the ASTM and RILEM standards, which employ ultrasonic contact measurements. The principal advantage of ultrasonic images and the proposed methodology over contact ultrasonic velocity measurements is the possibility of detecting incipient damage caused by accelerated freeze–thaw cycles.
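The abstract defines the two image parameters only by name; below is a minimal sketch of one plausible formulation, assuming non-assessable pixels are marked as NaN in the velocity image (an illustration, not the paper's exact definition):

```python
import numpy as np

def naap_and_weighted_velocity(velocity_image):
    """Toy version of the two image parameters: NAAP is the fraction of
    pixels where no velocity could be measured (NaN here), and the
    weighted average velocity down-weights the map by (1 - NAAP)."""
    total = velocity_image.size
    assessable = ~np.isnan(velocity_image)
    naap = 1.0 - assessable.sum() / total
    mean_velocity = np.nanmean(velocity_image)  # mean over assessable pixels
    weighted_velocity = mean_velocity * (1.0 - naap)
    return naap, weighted_velocity

# Example: a 4x4 velocity map (m/s) with two non-assessable pixels.
img = np.array([[4500., 4400., np.nan, 4600.],
                [4550., 4500., 4450., 4520.],
                [np.nan, 4480., 4510., 4530.],
                [4490., 4470., 4500., 4510.]])
print(naap_and_weighted_velocity(img))  # (0.125, ~3940 m/s)
```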
Abstract:
Coupled device and process simulation tools, collectively known as technology computer-aided design (TCAD), have been used in the integrated circuit industry for over 30 years. These tools allow researchers to converge quickly on optimized device designs and manufacturing processes with minimal experimental expenditure. The PV industry has been slower to adopt these tools, but is quickly developing competency in using them. This paper introduces a predictive defect-engineering paradigm and simulation tool, while demonstrating its effectiveness at increasing the performance and throughput of current industrial processes. The impurity-to-efficiency (I2E) simulator is a coupled process and device simulation tool that links wafer material purity, processing parameters and cell design to device performance. The tool has been validated with experimental data and used successfully with partners in industry. The simulator has also been deployed as a free web-accessible applet, which is available for use by the industrial and academic communities.
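The abstract does not reproduce the model equations; one link in such a purity-to-performance chain, stated here only as a standard textbook assumption rather than the I2E model itself, is the Shockley-Read-Hall lifetime limit set by a dissolved impurity concentration:

```latex
% SRH lifetime limited by a point-defect concentration N_t (e.g. interstitial
% iron); sigma = capture cross-section, v_th = carrier thermal velocity.
% Assumed chain link, not necessarily the simulator's formulation.
\[
  \tau_{\mathrm{SRH}} \;=\; \frac{1}{\sigma\, v_{\mathrm{th}}\, N_t}
\]
```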
Abstract:
A sustainable manufacturing process must rely on an equally sustainable supply of raw materials and energy. This paper presents the results of studies on sustainable business models for the minerals industry as a fundamental precursor to a sustainable manufacturing process. As has happened in other economic activities, the mining and minerals industry has come under tremendous pressure to improve its social, developmental, and environmental performance. Mining, refining, and the use and disposal of minerals have in some instances led to significant local environmental and social damage. Nowadays, as elsewhere in the corporate world, companies are routinely expected to perform to ever higher standards of behavior, going well beyond achieving the best rate of return for shareholders. They are also increasingly being asked to be more transparent and subject to third-party audit or review, especially in environmental matters. In terms of the environment, there are three inter-related areas where innovation and new business models can make the biggest difference: carbon, water and biodiversity. The focus is on these three areas for two reasons. First, the industrial and energetic minerals industry has significant footprints in each of these areas. Second, these are the areas where the potential environmental impacts go beyond local stakeholders and communities and can even be global, as in the case of carbon. Prioritizing efforts in these areas will therefore ultimately be a strategic differentiator as the industry's businesses continue to grow. Over the next forty years, the world's population is predicted to rise from 6,300 million to 9,500 million people. This will mean a huge demand for natural resources. Indeed, consumption rates are such that current demand for raw materials will probably soon exceed the planet's capacity. As awareness of the situation grows, the public is demanding goods and services that are more environmentally sustainable. This means that massive efforts are required to reduce the amount of materials we use, including freshwater, minerals and oil, biodiversity, and marine resources. It is clear that business as usual is no longer possible. Today, companies face not only the economic fallout of the financial crisis; they face the substantial challenge of transitioning to a low-carbon economy constrained by dwindling, easily accessible natural resources. Innovative business models offer pioneering companies an early start toward the future. They can signal to consumers how to make sustainable choices and provide rewards for both the consumer and the shareholder. Climate change and carbon remain major risk discontinuities that we need to understand and deal with better. In the absence of a global carbon solution, the principal objective of any individual country should be to reduce its global carbon emissions by encouraging conservation. The mineral industry's internal response is to continue to focus on reducing the energy intensity of existing operations through energy efficiency and the progressive introduction of new technology. Planning of new projects must ensure that their energy footprint is minimal from the start. These actions will increase the long-term resilience of the business to uncertain energy and carbon markets.
This focus, combined with a strong demand for skills in this strategic area, requires an appropriate change in the initial and continuing training of engineers and technicians and in their awareness of eco-design. It will also require the development of measurement tools for consistent comparisons between companies, and the integration of assessments of the carbon footprint of mining equipment and services into comprehensive impact studies on the sustainable development of the economy.
Abstract:
After more than 40 years of life, software evolution should be considered a mature field. However, despite such a long history, many research questions remain open, and controversial studies about the validity of the laws of software evolution are common. During the first part of these 40 years the laws themselves evolved to adapt to changes in both the research and the software industry environments. This process of adaptation to new paradigms, standards, and practices stopped about 15 years ago, when the laws were revised for the last time. However, most of the controversial studies have been raised during this latter period. Based on a systematic and comprehensive literature review, in this paper we describe how and when the laws, and the software evolution field, evolved. We also address the current state of affairs regarding the validity of the laws, how they are perceived by the research community, and the developments and challenges that are likely to occur in the coming years.
Abstract:
This paper presents the design and implementation of an intelligent control system based on local neurofuzzy models of the milling process, relayed through an Ethernet-based application. Its purpose is to control the spindle torque of a milling process by using an internal model control paradigm to modify the feed rate in real time. The stabilization of the cutting torque is especially necessary in milling processes such as high-speed roughing of steel moulds and dies that present minor geometric uncertainties. Maintaining the cutting torque increases the material removal rate and reduces the risk of damage to the spindle, a very sensitive and expensive component in all high-speed milling machines, due to excessive vibration. Torque control is therefore an interesting challenge from an industrial point of view.
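The abstract names the internal model control (IMC) scheme without detail; the loop below is a deliberately simplified sketch built on a hypothetical first-order torque model, not the paper's neurofuzzy controller, and all parameter values are invented:

```python
# Toy IMC loop: adjust feed rate so spindle torque tracks a reference.
# Assumed (hypothetical) plant: torque responds to feed as a first-order lag.

def simulate_imc(torque_ref=20.0, steps=200, dt=0.05):
    k_model = 2.0        # internal model gain: torque per unit feed (assumed)
    k_plant = 2.2        # "real" plant gain differs from the model
    tau = 0.3            # plant time constant (s)
    alpha = 0.5          # IMC filter coefficient
    torque, feed, correction = 0.0, 0.0, 0.0
    for _ in range(steps):
        model_torque = k_model * feed
        # IMC: filter the model/plant mismatch, then invert the model.
        correction += alpha * ((torque - model_torque) - correction)
        feed = (torque_ref - correction) / k_model
        # First-order plant response to the commanded feed.
        torque += dt / tau * (k_plant * feed - torque)
    return torque, feed

print(simulate_imc())  # torque settles near the 20 N·m reference
```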
Abstract:
Some free, open-source software projects have been around for quite a long time, the longest-living ones dating from the early 1980s. For some of them, detailed information about their evolution is available in source code management systems that track all their code changes over periods of more than 15 years. This paper examines in detail the evolution of one such project, glibc, with the main aim of understanding how it evolved and how it matched Lehman's laws of software evolution. As a result, we have developed a methodology for studying the evolution of such long-lived projects based on the information in their source code management repository, described in detail several aspects of the history of glibc, including some activity and size metrics, and found that some of the laws of software evolution may not hold in this case.
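The paper's methodology is built on repository data; as a minimal hedged sketch of the idea (not the authors' tooling), the yearly commit activity of a project such as glibc can be extracted directly from its git history:

```python
# Minimal sketch: count commits per year from a local git clone.
# Assumes 'git' is on PATH and repo_path points at a clone (e.g. of glibc).
import subprocess
from collections import Counter

def commits_per_year(repo_path):
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         "--pretty=format:%ad", "--date=format:%Y"],
        capture_output=True, text=True, check=True)
    return Counter(log.stdout.splitlines())

# Example (hypothetical path): activity metric over the project's lifetime.
# print(sorted(commits_per_year("/path/to/glibc").items()))
```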
Abstract:
The design of nuclear power plants has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) demands a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. Here is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamic analysis support through simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation in the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As the complexity is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations, which is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. These techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited to an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
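The thesis text itself is not reproduced here; purely as a hedged illustration of the cost problem it addresses, a damage probability can be estimated by Monte Carlo sampling over uncertain parameters, with each sample standing in for one expensive coupled simulation run (the surrogate model and the 1200 K threshold below are invented):

```python
# Toy Monte Carlo estimate of a damage probability.  Each call to
# 'accident_simulation' stands in for one expensive coupled simulation;
# the model and the damage threshold are invented for illustration.
import random

def accident_simulation(params):
    """Hypothetical surrogate: peak clad temperature (K) for one sequence."""
    operator_delay, power_fraction = params
    return 800.0 + 90.0 * operator_delay + 300.0 * power_fraction

def damage_probability(n_runs, threshold=1200.0, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        params = (rng.uniform(0.0, 5.0),   # operator response delay (min)
                  rng.uniform(0.0, 1.0))   # residual power fraction
        if accident_simulation(params) > threshold:
            hits += 1
    return hits / n_runs

# The estimator's error shrinks as 1/sqrt(n_runs): halving it costs 4x runs,
# which is why techniques that reduce the required simulations matter.
print(damage_probability(10_000))
```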
Abstract:
Helium retention in irradiated tungsten leads to swelling, pore formation, sample exfoliation and embrittlement, with deleterious consequences in many applications. In particular, tungsten is proposed for use in future nuclear fusion plants because of its good refractory properties. However, serious concerns about tungsten survivability stem from the fact that it must withstand severe irradiation conditions. In magnetic fusion and in inertial fusion (particularly with direct-drive targets), tungsten components will be exposed to low- and high-energy helium irradiation, respectively. A common feature is that the most detrimental situations will take place in pulsed mode, i.e., under high-flux irradiation. There is increasing evidence of a correlation between a high helium flux and an enhancement of the detrimental effects on tungsten. Nevertheless, the nature of these effects is not well understood due to the subtleties imposed by the exact temperature profile evolution, ion energy, pulse duration, existence of impurities and simultaneous irradiation with other species. Object kinetic Monte Carlo is the technique of choice to simulate the evolution of radiation-induced damage inside solids on large temporal and spatial scales. We have used the recently developed code MMonCa (Modular Monte Carlo simulator), presented at COSIRES 2012 for the first time, to study He retention (and defect evolution in general) in tungsten samples irradiated with high-intensity helium pulses. The code simulates the interactions among a large variety of defects during the irradiation stage and the subsequent annealing steps. The results show that the pulsed mode leads to significantly higher He retention at temperatures above 700 K. In this paper we discuss the process of He retention in terms of trap evolution, and we discuss the implications of these findings for inertial fusion.
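MMonCa itself is not sketched here; as a hedged toy of the object kinetic Monte Carlo idea (residence-time algorithm with Arrhenius detrapping rates; every parameter value below is invented), a single-trap He detrapping model looks like:

```python
# Toy object-KMC: He atoms detrapping from a single trap type in tungsten.
# Residence-time (BKL) algorithm with an Arrhenius detrapping rate.
# All parameter values are invented for illustration.
import math, random

K_B = 8.617e-5          # Boltzmann constant (eV/K)
NU0 = 1e13              # attempt frequency (1/s), assumed
E_DETRAP = 1.8          # detrapping energy (eV), assumed

def simulate_detrapping(n_trapped, temperature, t_end, seed=7):
    """Return how many He atoms remain trapped after t_end seconds."""
    rng = random.Random(seed)
    rate1 = NU0 * math.exp(-E_DETRAP / (K_B * temperature))  # per atom
    t = 0.0
    while n_trapped > 0:
        total_rate = n_trapped * rate1
        t += -math.log(1.0 - rng.random()) / total_rate  # next-event time
        if t > t_end:
            break
        n_trapped -= 1                                   # one He atom escapes
    return n_trapped

# Retention drops sharply with temperature, as in pulsed-irradiation studies.
for T in (600, 700, 800):
    print(T, simulate_detrapping(10_000, T, t_end=1.0))
```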
Abstract:
Storm evolution is fundamental for analysing the damage progression of the different failure modes and for establishing suitable protocols for maintaining and optimally sizing structures. However, this aspect has hardly been studied, and practically all of the studies dealing with the subject adopt the equivalent triangular storm. In contrast with this approach, two new ones are proposed. The first is the Equivalent Triangle Magnitude Storm (ETMS) model, whose base, the triangular storm duration D, is established such that its magnitude H_T (the area describing the storm history above the reference threshold level which sets the storm condition) equals the real storm magnitude. The other is the Equivalent Triangle Number of Waves Storm (ETNWS), where the base is expressed in terms of the real storm's number of waves, N_z. Three approaches are used for estimating the mean period T_m associated with each of the sea states defining the storm evolution, which is necessary to determine the full energy flux withstood by the structure in the course of the extreme event: two are based on the representativity of the JONSWAP spectrum, and the other uses the bivariate Gumbel copula (H_s, T_m) resulting from adjusting the storm peaks. The representativity of the proposed approaches and of those defined in the specialised literature is analysed by comparing the main armour layer's progressive loss of hydraulic stability caused by real storms with that caused by the theoretical ones; an empirical maximum energy flux model is used for this purpose. The agreement between the empirical and theoretical results demonstrates that the representativity of the different approaches depends on the storm characteristics, and points towards the need to investigate other geometrical shapes to characterise the storm evolution associated with sea states heavily influenced by swell wave components.
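A hedged sketch of the ETMS equivalence described above (notation assumed here: H_0 is the threshold significant wave height, H_p the storm peak): equating the triangle's area to the real storm magnitude fixes the equivalent duration:

```latex
% Equivalent Triangle Magnitude Storm: the triangle of height (H_p - H_0)
% and base D must enclose the real storm magnitude H_T (assumed notation).
\[
  H_T \;=\; \int_{t_0}^{t_1} \bigl(H_s(t) - H_0\bigr)\,dt
       \;=\; \tfrac{1}{2}\,D\,\bigl(H_p - H_0\bigr)
  \quad\Longrightarrow\quad
  D \;=\; \frac{2\,H_T}{H_p - H_0}
\]
```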
Abstract:
The Partido Stream is a small torrential course that flows into the marsh of the Doñana National Park, an area that was declared a World Heritage Site in 1994. Before 1981, floods occurred and the stream overflowed onto a floodplain. As an old alluvial fan, the floodplain has a singular orography and functionality. From the floodplain, several drainage channels, locally called caños, discharged into the marsh. The Partido Stream had the morphology of a caño and covered approximately 8 km from the old fan to the marsh. The stream was straightened and channelised in 1981 in order to cultivate the old fan. This concentrated the floods of the following years between the banks, which increased the water depth and the shear stress, thus scouring the river bed and banks. The eroded materials were carried towards the marsh, where a new alluvial fan evolved. Control measures on the old fan were implemented in 2006 to stop the development of the new alluvial fan downstream over the marsh, so that the stream would partially recover the original behaviour it had before channelisation, moving towards a new, balanced state. The present study describes the geomorphological evolution caused by channelisation since 1981 and the later, slow recovery of the original hydraulic-sedimentation regime since 2006. Additionally, it deepens the understanding of the original hydraulic behaviour of the stream by combining field data and 2D simulations.