871 results for Expected gain


Relevance:

20.00%

Publisher:

Abstract:

In developing economies, consumption of electricity in the residential and commercial sectors has increased with economic development. In order to identify the factors for effective facilitation of standards and labeling programs, this article explores the factors that affect consumer choice of energy-efficient products. The main findings are as follows: (1) Consumers in Thailand show the highest awareness of environmentally friendly concepts, followed by India and China. (2) Chosen labeled products include air-conditioners, TVs, refrigerators and washing machines, but not some popular products such as ceiling fans, electric fans or mobile phones. (3) Consumers with a higher perception of energy conservation are more likely to buy energy-efficient products. (4) Consumers in China, India and Thailand are sensitive to the energy efficiency of products, primarily because efficient products lead to less expenditure on electricity. (5) Labeling makes the energy-efficiency levels of products more visible and thus helps consumers to choose such products.

Relevance:

20.00%

Publisher:

Abstract:

In the present uncertain global context of pursuing social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant part of this energy mix up to 2050, with an expected share of around 70% of global and about 60% of European electricity generation, and coal will remain a key player. Hence, a direct effect on CO2 emissions is expected in the business-as-usual scenario, with forecasts of roughly three times the present CO2 concentration, up to 1,200 ppm, by the end of this century. The Kyoto protocol was the first approach to taking global responsibility for CO2 emissions monitoring, with cap targets for 2012 relative to 1990. Some of the principal CO2 emitters did not ratify the reduction targets, although the USA and China are taking their own actions and parallel reduction measures. More efficient combustion processes consuming less fuel would be a significant contribution from the electricity generation sector to dwindling CO2 concentration levels, but they might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into practical use. After the first research projects and initial scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR & EGR) and storage. The study evaluates the state of CO2 capture technology development, and the availability and investment cost of the different technologies, with few operating-cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this work is to propose a structure for simulating power systems using behavioral models of nonlinear DC-DC converters implemented through a look-up table of gains. This structure is specially designed for converters whose output impedance depends on the load current level, e.g. quasi-resonant converters. The proposed model is generic: its parameters can be obtained by directly measuring the transient response at different operating points. It also includes optional functionality for modeling converters with current limitation and with current sharing when operated in parallel. The proposed structure also allows additional characteristics of the DC-DC converter to be included, such as efficiency as a function of the input voltage and the output current, or overvoltage and undervoltage protections. In addition, the proposed model is valid for both overdamped and underdamped situations.
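
As an illustration of the idea, and not the authors' implementation, the following minimal Python sketch models a converter whose DC gain and output impedance are interpolated from a look-up table indexed by load current; the table values are invented for the example.

```python
import numpy as np

# Hypothetical look-up table measured at several operating points:
# load current (A) -> (DC gain, output impedance in ohms).
I_OP  = np.array([0.5, 1.0, 2.0, 4.0])
GAIN  = np.array([0.95, 0.93, 0.90, 0.84])   # invented values
Z_OUT = np.array([0.10, 0.15, 0.25, 0.45])   # invented values

def behavioral_output(v_in, i_load):
    """Quasi-static output voltage: gain and output impedance are
    interpolated from the table at the present load current."""
    g = np.interp(i_load, I_OP, GAIN)
    z = np.interp(i_load, I_OP, Z_OUT)
    return g * v_in - z * i_load

# Sweep the load and observe the load-dependent voltage droop.
for i in (0.5, 1.5, 3.0):
    print(f"i_load = {i:4.1f} A -> v_out = {behavioral_output(12.0, i):6.3f} V")
```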

Relevance:

20.00%

Publisher:

Abstract:

In the last fifteen years, a liberalization of the electricity markets has taken place in Western countries, accompanied by increasing concern about the environmental impact of the different generation technologies. This has resulted in a more restrictive regulatory framework for fossil-fuel generation technologies, with the greatest impact on those based on oil products and coal. Worldwide, new regulations on the emissions of different pollutants (CO2, SO2, NOx...) have been appearing, and these changes have had a deep impact on the profitability and operation of coal-fired power plants. This situation has driven coal-fired generation technology to advance considerably in recent years (supercritical boilers, desulphurization systems, coal gasification...). Nevertheless, the development of renewable generation, gas-fired generation in combined-cycle plants, and public opinion on coal-fired generation, mainly in Europe, constitute a serious obstacle to generation with coal. It is therefore necessary to look for ways to optimize the competitiveness of coal-fired power plants, and the most reasonable path is to improve the expected margin of these plants, and in particular the purchase cost of the coal. This is all the more important given the large number of existing coal-fired plants and the high number of new coal plant construction projects in Asian countries. Accordingly, this doctoral dissertation focuses on defining a methodology to optimize, from an economic and technical point of view, the purchase of coal destined for consumption in a thermal power plant, thereby reducing the cost of the coal consumed and improving the plant's competitiveness. It also aims to determine which tools can be used to optimize the management of the coal after its purchase, opening the possibility of obtaining additional margins on that coal. In line with this goal, the author makes three novel contributions in the field of steam coal contracting and its subsequent optimization:
- Evaluation of coals for purchase, considering the effect of coal quality on the generation cost associated with each coal offered.
- The creation, development, deployment and use of a powerful fuel-planning tool, designed to determine the optimal economic solution for supplies, consumption and stock levels for a generation portfolio of coal- and fuel-oil-fired power plants.
- The extension of a contractual methodology common in the spot market for Liquefied Natural Gas to the spot contracting of imported coal, based on the development of framework agreements for the purchase/sale of coal whose flexibility allows additional financial results to be obtained after the purchase of a coal.
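
As a hedged illustration of the first contribution, and not the thesis's actual model, the Python sketch below ranks candidate coals by an effective generation cost that corrects the delivered price for heating value; the coals and figures are invented, and a real evaluation would also price quality effects such as ash, sulphur and moisture content.

```python
# Rank candidate coals by effective fuel cost per electrical MWh.
# Invented example data; a real model would add quality penalties
# (ash handling, desulphurization, moisture, mill capacity, ...).
PLANT_EFFICIENCY = 0.38  # net electrical efficiency, assumed

candidates = [
    # name, delivered price (USD/t), net calorific value (kcal/kg)
    ("Coal A", 95.0, 6000),
    ("Coal B", 88.0, 5500),
    ("Coal C", 80.0, 4800),
]

# Thermal MWh per tonne, per unit of (kcal/kg): 1 kcal = 4184 J.
KCAL_KG_TO_MWH_T = 1000 * 4184 / 3.6e9

def cost_per_mwh_e(price, ncv):
    mwh_th_per_t = ncv * KCAL_KG_TO_MWH_T
    return price / (mwh_th_per_t * PLANT_EFFICIENCY)

for name, price, ncv in sorted(candidates, key=lambda c: cost_per_mwh_e(c[1], c[2])):
    print(f"{name}: {cost_per_mwh_e(price, ncv):6.2f} USD/MWh_e")
```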

Relevance:

20.00%

Publisher:

Abstract:

We propose a pulse shaping and shortening technique for pulses generated by gain-switched single-mode semiconductor lasers, based on a Mach-Zehnder interferometer with variable delay. The spectral and temporal characteristics of the pulses obtained with the proposed technique are investigated through numerical simulations. Experiments are performed with a Distributed Feedback laser and a Vertical Cavity Surface Emitting Laser emitting at 1.5 µm, obtaining a pulse duration reduction of 25-30%. The main asset of the proposed technique is that it can be applied to different devices and pulses, taking advantage of the flexibility of the gain-switching technique.
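
To illustrate the principle only (a simplified sketch, not the paper's simulation), the Python fragment below interferes a linearly chirped Gaussian pulse with a delayed replica of itself, as a Mach-Zehnder interferometer with a variable delay arm would, and searches the interferometer phase for the shortest output; the pulse width, chirp and delay are all illustrative.

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of the region above half peak."""
    above = np.where(y >= y.max() / 2)[0]
    return t[above[-1]] - t[above[0]]

t = np.linspace(-8, 8, 4001)
sigma, chirp = 1.0, 3.0   # illustrative pulse width and linear chirp
pulse = lambda t0: (np.exp(-(t - t0)**2 / (2 * sigma**2))
                    * np.exp(1j * chirp * (t - t0)**2))

tau = 0.3                 # illustrative interferometer delay
best = min(
    (fwhm(t, np.abs(0.5 * (pulse(0.0) + np.exp(1j * phi) * pulse(tau)))**2), phi)
    for phi in np.linspace(0, 2 * np.pi, 73)
)
print(f"input FWHM  = {fwhm(t, np.abs(pulse(0.0))**2):.3f}")
print(f"output FWHM = {best[0]:.3f} at interferometer phase {best[1]:.2f} rad")
```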

Relevance:

20.00%

Publisher:

Abstract:

Understanding the molecular programs of the generation of human dopaminergic neurons (DAn) from their ventral mesencephalic (VM) precursors is of key importance for basic studies, progress in cell therapy, drug screening and pharmacology in the context of Parkinson's disease. The nature of human DAn precursors in vitro is poorly understood, their properties are unstable, and their availability is highly limited. Here we present positive evidence that human VM precursors retaining their genuine properties and long-term capacity to generate A9-type Substantia nigra human DAn (the hVM1 model cell line) can be propagated in culture. During one month of differentiation, these cells activate all the key genes needed to progress from pro-neural and pro-dopaminergic precursors to mature and functional DAn. For the first time, we demonstrate that gene cascades are correctly activated during differentiation, resulting in the generation of mature DAn. These DAn have morphological and functional properties indistinguishable from those generated by VM primary neuronal cultures. In addition, we have found that the forced expression of Bcl-XL induces an increase in the expression of key developmental genes (MSX1, NGN2), maintains the temporal profile of PITX3 expression, and also enhances genes involved in long-term DAn function, maintenance and survival (EN1, LMX1B, NURR1 and PITX3). As a result, Bcl-XL anticipates and enhances DAn generation.

Relevance:

20.00%

Publisher:

Abstract:

Abstract. This paper describes a new and original method for designing oscillators based on the Normalized Determinant Function (NDF) and the Transpose Return Relations (RRT). First, a review of the loop-gain method is given, with its pros and cons and some examples of the wrong solutions this method can provide. The loop-gain method produces wrong solutions in some cases because certain necessary conditions have not been fulfilled; the necessary conditions required to assure a correct solution are described. The necessity of using the NDF or the Transpose Return Relations (RRT), which are related to the true loop gain, to test the additional conditions is demonstrated. To conclude, the steps for oscillator design and analysis using the proposed NDF/RRT method are presented. The wrong solutions of the loop-gain method are compared with the NDF/RRT ones, and the accuracy of this method in estimating the oscillation frequency and QL is demonstrated. Additional examples of reference-plane oscillators (Z/Y/T) are added and analyzed with the proposed NDF/RRT method, even though these oscillators cannot be analyzed using the classic loop-gain method.
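
For context only: the classic loop-gain start-up criterion that the paper critiques looks for a frequency where the loop gain crosses zero phase with magnitude above one. The Python sketch below applies that textbook check to an invented second-order (band-pass) loop response; it is not the paper's NDF/RRT computation.

```python
import numpy as np

# Invented band-pass loop gain G(jw) for illustration.
f = np.linspace(0.5e9, 2.0e9, 20001)
w = 2 * np.pi * f
f0, Q, k = 1.2e9, 20.0, 2.5               # resonance, loaded Q, peak gain
w0 = 2 * np.pi * f0
G = k / (1 + 1j * Q * (w / w0 - w0 / w))

# Find zero crossings of the phase, then test |G| > 1 there.
phase = np.angle(G)
for i in np.where(np.diff(np.sign(phase)) != 0)[0]:
    if abs(G[i]) > 1:
        print(f"candidate oscillation at {f[i]/1e9:.4f} GHz, |G| = {abs(G[i]):.2f}")
```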

Relevance:

20.00%

Publisher:

Abstract:

We present an analytical model for studying optical bistability in semiconductor lasers that exhibit a logarithmic dependence of the optical gain on carrier concentration. Model results are shown for a Fabry–Pérot quantum-well laser and compared with the predictions of a commercial computer-aided design (CAD) software tool.
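
For reference, the logarithmic gain dependence mentioned above is commonly written for quantum-well lasers in the textbook form

g(N) = g_0 \ln\!\left(\frac{N}{N_{tr}}\right)

where g_0 is a gain coefficient and N_{tr} is the transparency carrier density; this is the standard expression for such devices, not necessarily the exact parametrization used in the paper.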

Relevance:

20.00%

Publisher:

Abstract:

Systems used for localization of targets such as goods, individuals, or animals commonly rely on fully operational means to meet the demands of the final application. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are worthwhile not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references like those of the Global Positioning System (GPS). A number of energy-harvesting solutions, like thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts. Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are therefore not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge usually addressed aims at attaining high precision levels using complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with just part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The usage of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; it may also be an advantage in economic terms. There is a second kind of node: battery powered (without kinetic energy harvesters) and therefore dependent on temperature and battery replacements.
In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. They are also battery-powered, and are used to retrieve information from the network so that it can be presented to users. The system's operational chain starts with the kinetic-powered nodes broadcasting their own identifiers. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position read from the GPS module of the battery-operated node prior to the detection. The characteristics of the system give this operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements of the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors rise even more. Recall at this point that such behavior is tied to the aforementioned power-saving policies to extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user is aware of such a detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically-powered target and a population of battery-operated nodes, with higher densities than before in localization. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter. The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model the nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and become more environmentally friendly by diminishing the potential number of batteries that can be lost.
Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as has been explained, one of the topics of this Thesis.
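
As a hedged illustration of the kind of tracker mentioned (the alpha-beta variant, in a 1-D toy setting rather than the scenario of the Thesis), the Python sketch below smooths position fixes distorted by noisy reference locations; the target motion, noise level and filter gains are all invented.

```python
import random

ALPHA, BETA = 0.5, 0.1   # invented filter gains
DT = 1.0                 # time step (s)

def alpha_beta_track(measurements):
    """1-D alpha-beta filter: predict with constant velocity, then
    correct position and velocity with fixed gains."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * DT          # predict
        r = z - x_pred               # innovation (residual)
        x = x_pred + ALPHA * r       # correct position
        v = v + (BETA / DT) * r      # correct velocity
        estimates.append(x)
    return estimates

random.seed(1)
truth = [0.5 * t for t in range(60)]                  # target moving at 0.5 m/s
noisy = [p + random.gauss(0.0, 3.0) for p in truth]   # distorted references
est = alpha_beta_track(noisy)
err = sum(abs(e - p) for e, p in zip(est, truth[1:])) / len(est)
print(f"mean absolute error after filtering: {err:.2f} m")
```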

Relevance:

20.00%

Publisher:

Abstract:

We analyze the gain-switching dynamics of two-section tapered lasers by means of a simplified three-rate-equation model. The goal is to improve the understanding of the underlying physics and to optimize the device geometry to achieve high-power, short-duration optical pulses.
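
As a rough illustration of gain switching (a minimal sketch using standard single-mode laser rate equations for carrier and photon densities, not the three-equation two-section model of the paper), the Python code below drives a laser with a short current pulse well above threshold and locates the resulting optical spike; all device parameters are generic textbook values.

```python
import numpy as np
from scipy.integrate import solve_ivp

q, V = 1.602e-19, 1e-16      # electron charge (C), active volume (m^3)
g0, Ntr = 1e-12, 1e24        # gain coefficient (m^3/s), transparency density (1/m^3)
tau_n, tau_p = 2e-9, 2e-12   # carrier and photon lifetimes (s)
beta, Gamma = 1e-4, 0.3      # spontaneous emission factor, confinement factor

def current(t):
    """200 ps drive pulse, well above threshold."""
    return 200e-3 if 0.1e-9 < t < 0.3e-9 else 1e-3

def rates(t, y):
    N, S = y
    G = g0 * (N - Ntr)                       # linear gain above transparency
    dN = current(t) / (q * V) - N / tau_n - G * S
    dS = Gamma * G * S - S / tau_p + beta * Gamma * N / tau_n
    return [dN, dS]

sol = solve_ivp(rates, (0, 1e-9), [Ntr, 1e10], max_step=1e-13)
peak = sol.y[1].argmax()
print(f"gain-switched photon peak at t = {sol.t[peak]*1e12:.1f} ps")
```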

Relevance:

20.00%

Publisher:

Abstract:

Weight gain during pregnancy can be prevented by means of a physical exercise program.

Relevance:

20.00%

Publisher:

Abstract:

Corrosion of steel is one of the main pathologies affecting reinforced concrete structures exposed to marine environments or to de-icing salts. When corrosion occurs, an oxide layer develops around the reinforcement surface, which occupies a greater volume than the initial steel; thus, it induces internal pressure on the surrounding concrete that leads to cracking and, eventually, to full spalling of the concrete cover. During the last years much effort has been devoted to understanding the process of cracking; however, there is still a lack of knowledge regarding the mechanical behavior of the oxide layer, which is essential for the prediction of cracking. Thus, a methodology has been developed and applied in this thesis to gain further understanding of the behavior of the steel-oxide-concrete system, combining experiments and numerical simulations. Accelerated corrosion tests were carried out in laboratory conditions, using the impressed-current technique. To get experimental information close to the oxide layer, concrete prisms with a smooth steel tube as reinforcement were selected as specimens, designed so that a single main crack would form across the cover. During the tests, the specimens were equipped with instruments specially designed to measure the variation of the inner diameter and volume of the tubes, and the width of the main crack was recorded using a commercial extensometer adapted to the geometry of the specimens. The boundary conditions were carefully designed so that plane current and strain fields could be expected during the tests, resulting in nearly uniform corrosion along the length of the tube, so that the tests could be reproduced in numerical simulations. Series of tests were carried out with various current densities and corrosion depths. Complementarily, the fracture behavior of the concrete was characterized in independent tests, and the gravimetric loss of the steel tubes was determined by standard means. In all the tests, the main crack grew very slowly during the first microns of corrosion depth, but after a critical corrosion depth it fully developed and opened faster; the current density influenced the critical corrosion depth. The variations of the inner diameter and the inner volume of the tubes had different trends, which indicates that the deformation of the tube was not uniform. After accelerated corrosion, the specimens were cut into slices, which were used in post-corrosion tests. The pattern of cracking along the reinforcement was investigated in slices that were impregnated under vacuum with resin containing fluorescein, to enhance the visibility of cracks under ultraviolet light, and a study was carried out to assess the presence of oxide within the cracks. In all the specimens, a main crack developed through the concrete cover, infiltrated with oxide, together with several thin secondary cracks around the reinforcement; the number of cracks varied with the corrosion depth of the specimen. For specimens with the same corrosion, the number of cracks and their positions differed from one specimen to another and between cross-sections of a given specimen, due to the heterogeneity of concrete. Finally, the bond between the steel and the concrete was investigated, using a device designed to push the tube through the concrete. The curves of stress versus displacement of the tube presented a marked peak, followed by a steady descent, with a notable influence of the corrosion depth and the crack width on the residual stress. To simulate the cracking of concrete due to corrosion of the reinforcement, a numerical model was implemented. It combines finite elements with an embedded adaptable crack, which reproduce the fracture of concrete according to the standard cohesive crack model, with interface elements, so-called expansive joint elements, which were specifically designed to reproduce the volumetric expansion of the oxide and which incorporate its mechanical behavior. In the expansive joint element, a debonding effect was implemented, consisting of sliding and separation, which proved to be essential to achieve proper localization of cracks and was achieved by strongly reducing the shear and tensile stiffnesses of the oxide. With this model, simulations of the accelerated corrosion tests were carried out on two-dimensional finite element models of the specimens. For the fracture behavior of concrete, the experimentally determined properties were used as input. For the oxide, a fluid-like behavior was initially assumed, with nearly perfect sliding and separation; then the parameters of the expansive joint element were adjusted to fit the experimental results. Changes in the normal stiffness of the oxide barely affected the results, and changes in the remaining parameters had only a moderate effect on the predicted crack width; however, the deformation of the tube was very sensitive to variations in the parameters of the oxide, due to the flexibility of the tube wall, which was crucial for the indirect determination of the constitutive parameters of the oxide. Finally, definitive simulations of the tests were carried out. The model reproduced the critical corrosion depth and the final behavior of the experimental curves; it was found that the variation of the inner diameter of the tubes is strongly influenced by their position relative to the main crack, in accordance with the experimental observations. From the comparison of the experimental and numerical results, properties of the mechanical behavior of the oxide were disclosed that otherwise could not have been measured.
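
As a back-of-the-envelope companion to this abstract (not part of the thesis), the Python sketch below estimates the free radial expansion imposed by the oxide layer from the corrosion depth, using the common simplification that rust occupies roughly twice the volume of the consumed steel; the volume ratio and depths are illustrative.

```python
# Free expansion of the oxide layer for uniform corrosion: a corrosion
# depth x of steel becomes gamma * x of oxide, so the imposed radial
# displacement at the steel surface is roughly (gamma - 1) * x.
GAMMA = 2.0  # oxide-to-steel volume ratio, a commonly assumed value

def free_expansion_um(corrosion_depth_um):
    return (GAMMA - 1.0) * corrosion_depth_um

for x in (5, 10, 20, 50):  # corrosion depths in microns (illustrative)
    print(f"x = {x:3d} um -> free radial expansion ~ {free_expansion_um(x):5.1f} um")
```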

Relevance:

20.00%

Publisher:

Abstract:

This Master's Thesis deals with a preliminary characterization of the behaviour of an industrial robot, configured with 4 links and 4 degrees of freedom, and subjected to machining forces at its end effector. The proposed working conditions are those typical of plants manufacturing aluminium-alloy parts for the automotive industry. This type of component comes from an initial casting process that produces rough parts. For medium and high volumes, depending on the required mechanical and plastic properties and on production costs, high pressure die casting (HPDC) and low pressure casting (LPC) are the two most used technologies in this first phase. For high pressure die casting, the most used aluminium alloys are, in symbolic designation according to the EN 1706 standard (numerical designation in brackets): EN AC AlSi9Cu3(Fe) (EN AC 46000), EN AC AlSi9Cu3(Fe)(Zn) (EN AC 46500), and EN AC AlSi12Cu1(Fe) (EN AC 47100); for low pressure casting, EN AC AlSi7Mg0,3 (EN AC 42100). For the first 3 alloys, the permitted Si content can exceed 10%, while the fourth alloy has admissible limits under 10% Si.
That means, from the point of view of machining, that components made of alloys with a Si content above 10% can be considered equivalent, while the fourth one must be studied separately. Geometrical and dimensional tolerances directly achievable from casting, gathered in standards such as ISO 8062 or DIN 1688-1, establish the limits of this process. Beyond those limits, guaranteeing production with the ppm objectives currently accepted by the market makes subsequent machining phases necessary. Those geometries that functionally require geometrical and/or dimensional tolerances defined according to ISO 1101, and that cannot be achieved by the initial casting process, must be obtained afterwards in a machining phase, in machining cells. In this case, the tolerances achievable with cutting processes are gathered in standards such as ISO 2768. In general terms, machining cells contain several CNCs that are interrelated and connected by robots that handle the parts in process among them. Those robots carry at their end a gripper in order to pick and release parts in machining fixtures, on interchange tables for changing the position of a part, in measurement and testing devices, or on entrance/exit conveyors. Robot repeatability is tight, even a few hundredths of a millimetre, defined according to ISO 9283. The problem is that those repeatability ranges are only guaranteed when there are no stresses or when they are negligible (e.g. when only moving parts). Although the inertia of moving parts at high speed makes intermediate paths less accurate, the beginning and the end of a trajectory (e.g. when picking up or releasing a part) are executed at relatively low speeds that reduce the effect of inertial forces and make it possible to achieve the repeatability mentioned above. The same does not hold if the gripper is removed and exchanged for a motorized spindle with a tool such as a drill, a boring tool, a cutter head, or face or tangential milling cutters. The machining forces would create torques at the joints so large and so variable that the robot control would not be able to react (or is, in principle, not prepared to), producing a deviation in the working trajectory, executed at low speed, that would trigger a position error (see the ISO 5458 standard) unacceptable for the desired function. It could even happen that the tolerance achieved by a supposedly more exact process turns out worse than the dimension the casting process would give, even though casting has, in principle, a larger dimensional variability in process (and hence a larger guaranteeable tolerance interval). As a matter of fact, CNC accuracy is very high (it can be neglected in most cases) and is not responsible for, say, the position tolerance when drilling a hole. Factors such as room and part temperature, the manufacturing quality of the fixtures and the stiffness of the clamping, table rotation and part positioning errors, whether there are previous holes, and whether the tool is well balanced and the shank suitable for the type of machining, have more influence. It is interesting that such a common, non-specific element in an industrial plant, in the environment described above, as a robot, which would not need to be added because it is already there (and the investment would therefore be very small), can improve the value chain by decreasing the manufacturing cost.
And if it could be arranged that the robot dedicated to handling tasks, during the many waiting periods it will enjoy while the CNC cuts, could pick up a spindle and support that machining, it would be doubly interesting. It is therefore attractive to characterize its behaviour and try to explain what would be necessary to carry this out, which is the purpose of this work. The selected robot architecture is of the SCARA type. The search for a robot easy to model and to analyze kinematically and dynamically, without significant limitations on the multifunctionality of the requested tasks, led to this choice over other architectures, such as the 6-degree-of-freedom anthropomorphic robots that are very popular in industry. This robot has 3 joints, 2 of which are revolute joints (1 degree of freedom each) and the third a slider or cylindrical joint (2 degrees of freedom). The first joint, a revolute one, joins the floor (considered link 1) with link 2. The second joint, of the same type, joins link 2 with link 3. These 2 arms can describe a horizontal movement in the X-Y plane. Link 3 is joined to link 4 by the cylindrical joint, and the movement it can describe is parallel to the Z axis. The robot has 4 degrees of freedom (4 motors). Regarding the possible tasks this type of robot can perform, its versatility covers both typical handling operations and cutting operations. One of the most usual machining operations is drilling, so this was chosen for modeling and analysis; within drilling, in order to bound the forces, solid drilling with a 9 mm diameter drill was selected. For the moment the robot is considered to behave as a rigid body, since the largest expected effect is that of the torques at the joints. To model the robot, the multibody systems method is used. Within this method there are several types of formulation (e.g. Denavit-Hartenberg). D-H generates a very large number of equations and unknowns; those unknowns are difficult to interpret and, for each position, one must stop to think about what they mean. The natural coordinates formulation was chosen instead. This system uses points and unit vectors to define the positions of the different bodies, and allows them to be shared, when possible and desired, to define the kinematic joints while reducing the number of variables. The unknowns are intuitive, the constraint equations are very simple, and the number of equations and unknowns is considerably reduced. However, "pure" natural coordinates have 2 problems. The first is that 2 elements at an angle of 0 or 180 degrees give rise to singular positions that can create problems in the constraint equations and must therefore be avoided. The second is that they do not act directly on the definition or the origin of the movements. It is therefore very convenient to complement this formulation with angles and distances (relative coordinates). This gives rise to mixed natural coordinates, the final formulation chosen for this Thesis. Mixed natural coordinates do not have the problem of singular positions, and their most important advantage lies in their usefulness when applying driving forces or torques and when evaluating errors: since they act directly on the origin unknowns (angles or distances), they control the motors directly. The algorithm, the simulation and the processing of results have been programmed in Matlab. To build the model in mixed natural coordinates, the robot under study must be modeled in 2 steps.
The first model is based on natural coordinates. To validate it, a defined trajectory is prescribed and the robot is analysed kinematically to check that it fulfils the requested movement while keeping its integrity as a multibody system. The points that define the robot (in this case, the starting and ending points of each element) are quantified. As the elements are considered rigid bodies, each of them is defined by its starting and ending points (the most interesting ones from the kinematic and dynamic point of view) and by a unit vector that is not collinear with those points. Unit vectors are placed wherever there is a rotation axis or information about an angle is needed; they are not needed to measure distances, nor does the number of DOFs have to coincide with the number of unit vectors. The length of each arm is defined as a geometric constant, and the constraints that define the nature of the robot and the relationships among its elements and its environment are then set.

The path is generated as a cloud of consecutive points defined in independent coordinates. Each set of independent coordinates defines, at a specific instant, a position and posture of the robot. To determine it completely, the dependent coordinates at that instant must also be known; they are obtained by solving the constraint equations with the Newton-Raphson method as a function of the independent coordinates. This is necessary because the dependent coordinates must satisfy the constraints, which is not guaranteed by the independent coordinates alone. Once the suitability of the model has been checked (first validation), the next step is model 2.

Model 2 adds to the natural coordinates of model 1 the relative coordinates, in the form of angles at the revolute pairs (3 angles: ϕ1, ϕ2 and ϕ3) and a distance at the prismatic pair (1 distance: s). These relative coordinates become the new independent coordinates, replacing the Cartesian independent coordinates of model 1, which were natural coordinates. It must then be reviewed whether the unit-vector system of model 1 is still sufficient; in this specific case it was necessary to add 1 additional unit vector to define the angles correctly through their dot- and/or cross-product equations. The set of constraints must grow by at least 4 equations, one per new variable.

The validation of model 2 has two phases. The first, as with model 1, is a kinematic analysis of the behaviour along a defined path. During this analysis, velocities and accelerations could be obtained from model 2, but they are not needed; only the movements and finite displacements are of interest. Once the consistency of the movements has been checked (second validation), the behaviour with interpolated trajectories is analysed kinematically.

The kinematic analysis with interpolated trajectories works with a minimum of 3 master points. In this case 3 points were chosen: the starting point, a middle point and the ending point. Fifty interpolations were used in each stretch (every 2 master points define a stretch), giving a total of 100 interpolations. The interpolation method is cubic splines with the condition of zero acceleration at both the starting and the ending point. This method generates the independent coordinates of the interpolated points of each stretch; the dependent coordinates are then obtained by solving the non-linear constraint equations with the Newton-Raphson method.
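The position problem just described admits a compact implementation. The following Matlab sketch (an assumed interface, not the thesis code) imposes the independent coordinates and corrects the dependent ones with Newton-Raphson until the constraint vector vanishes:

function q = positionProblem(PhiFun, JacFun, q0, indIdx, z, tol)
    % PhiFun returns the constraint vector Phi(q); JacFun its Jacobian dPhi/dq
    q = q0;
    q(indIdx) = z;                         % impose independent coordinates
    depIdx = setdiff(1:numel(q), indIdx);  % the remaining, dependent ones
    for it = 1:50                          % Newton-Raphson iterations
        Phi = PhiFun(q);
        if norm(Phi) < tol, return; end    % converged: constraints satisfied
        J = JacFun(q);
        dq = -J(:, depIdx) \ Phi;          % correct only dependent coordinates
        q(depIdx) = q(depIdx) + dq;
    end
    error('Newton-Raphson did not converge in 50 iterations');
end

The same routine can be reused at every interpolated point of the trajectory, taking the solution of the previous point as the initial guess q0, so that convergence is typically fast.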
The cubic spline method is very smooth; therefore, when a trajectory containing at least 2 clearly different movements must be designed, it has to be built in 2 steps that are joined afterwards. That is the case, for example, when one motor remains stopped during the first movement and a different motor remains stopped during the second one (and so on). Once the movement is obtained, the independent velocities and accelerations are computed with numerical differentiation formulas, in a process analogous to the one explained above, recalling the condition that the acceleration at t = 0 and t = end is 0. The dependent velocities and accelerations are calculated by solving the corresponding derivatives of the constraint equations. In a third validation of the model, the consistency of the interpolated movement is checked again.

Inverse dynamics calculates, for a defined movement (position, velocity and acceleration known at every instant) and for known external forces (e.g. weights), the forces that must be applied at the motors (where there is control) to obtain the requested movement. In inverse dynamics each instant of time is independent of the others, with its own position, velocity, acceleration and known forces. In this specific case, for the moment only the forces due to weight are applied, although forces of another nature could be added if preferred. The positions, velocities and accelerations come from the kinematic calculation, and the inertial effect of the forces taken into account (weight) is computed. The final result of the inverse dynamic analysis is the set of torques that the 4 motors must apply to reproduce the requested movement under the acting forces.

The fourth validation of the model consists of confirming that the movement achieved by applying the torques obtained in the inverse dynamics agrees with the movement from the kinematic analysis (the theoretical movement). This requires direct dynamics, which calculates the movement of the robot that results from applying torques at the motors and forces at the robot. Since none of the conditions obtained in the inverse dynamics is changed (motor torques and inertial forces due to the weight of the elements), the resulting real movement must be the same as the theoretical movement. When these results are achieved, the robot model is considered ready to work.

When an external machining force that was not taken into account in the inverse dynamics is introduced, while the motor torques remain those of the inverse dynamics, the real movement obtained is no longer the same as the theoretical movement. Closed-loop control is based on comparing the real movement with the expected movement and introducing the corrections required to minimise or cancel the differences; gains are applied, as corrections of position and/or tolerance, to remove them. The position error is evaluated as the difference, at each point, between the theoretical movement (calculated in the kinematic analysis) and the real movement achieved for each machining force and for a specific gain. Finally, the position errors obtained for each machining force and each gain are mapped, yielding a chart with the best accuracy the robot can deliver for each requested operation and the conditions that must be provided.
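To make the closed-loop correction concrete, the following single-joint Matlab sketch (all numerical values are illustrative assumptions, not data from this work) applies the feedforward torque of the inverse dynamics plus a proportional-derivative correction while an unmodelled machining torque acts as a disturbance:

J  = 0.05;  dt = 1e-3;  Kp = 200;  Kd = 5;    % assumed inertia [kg m^2], step, gains
t  = 0:dt:2;
theta_ref = 0.5*(1 - cos(pi*t/2));            % smooth reference motion [rad]
omega_ref = gradient(theta_ref, dt);          % reference velocity
tau_ff    = J*gradient(omega_ref, dt);        % feedforward torque (inverse dynamics)
tau_dist  = -0.8*ones(size(t));               % unmodelled machining torque [N m]
theta = 0; omega = 0; err = zeros(size(t));
for k = 1:numel(t)
    err(k) = theta_ref(k) - theta;                             % position error
    tau    = tau_ff(k) + Kp*err(k) + Kd*(omega_ref(k) - omega) + tau_dist(k);
    omega  = omega + (tau/J)*dt;                               % explicit Euler
    theta  = theta + omega*dt;
end
fprintf('final position error: %.4f rad\n', abs(err(end)));

With a purely proportional position correction, the residual error under a constant disturbance is approximately tau_dist/Kp, which is the kind of force-versus-gain trade-off that the error maps described above quantify for each machining operation.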

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The use of the Laser MegaJoule facility within the shock ignition scheme has been considered. In the first part of the study, one-dimensional hydrodynamic calculations were performed for an inertial confinement fusion capsule in the context of the shock ignition scheme, providing the energy gain and an estimate of the increase in peak power due to the reduction of the photon penetration expected during the high-intensity spike pulse. In the second part, we considered a Laser MegaJoule configuration consisting of 176 laser beams that have been grouped to provide two different irradiation schemes. In this configuration the maximum available energy and power are 1.3 MJ and 440 TW. Optimization of the laser-capsule parameters that minimize the irradiation non-uniformity during the first few ns of the foot pulse has been performed. The calculations take into account the specific elliptical laser intensity profile provided at the Laser MegaJoule and the expected beam uncertainties. A significant improvement of the illumination uniformity provided by the polar direct drive technique has been demonstrated. Three-dimensional hydrodynamic calculations have been performed in order to analyse the magnitude of the azimuthal component of the irradiation that is neglected in two-dimensional hydrodynamic simulations.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The city of Lorca (Spain) was hit on May 11th, 2011, by two consecutive earthquakes of magnitudes 4.6 and 5.2 Mw, causing casualties and important damage in buildings. Many of the damaged structures were reinforced concrete frames with wide beams. This study quantifies the expected level of damage on this structural type in the case of the Lorca earthquake by means of a seismic index Iv that compares the energy input by the earthquake with the energy absorption/dissipation capacity of the structure. The prototype frames investigated represent structures designed in two time periods (1994–2002 and 2003–2008), in which the applicable codes were different. The influence of the masonry infill walls and the proneness of the frames to concentrate damage in a given story were further investigated through nonlinear dynamic response analyses. It is found that (1) the seismic index method predicts levels of damage that range from moderate/severe to complete collapse, a prediction consistent with the observed damage; (2) the presence of masonry infill walls makes the structure very prone to damage concentration and reduces the overall seismic capacity of the building; and (3) a proper hierarchy of strength between beams and columns that guarantees the formation of a strong column-weak beam mechanism (as prescribed by seismic codes), together with countermeasures to avoid the negative interaction between non-structural infill walls and the main frame, would have reduced the level of damage from Iv=1 (collapse) to about Iv=0.5 (moderate/severe damage).