982 results for "Inverse method"
Abstract:
Kinematic redundancy occurs when a manipulator possesses more degrees of freedom than those required to execute a given task. Several kinematic techniques for redundant manipulators control the gripper through the pseudo-inverse of the Jacobian, but lead to a kind of chaotic internal motion with unpredictable arm configurations. Such algorithms are not easy to adapt to optimization schemes; moreover, there are often multiple optimization objectives that conflict with one another. Unlike single-objective optimization, where one attempts to find the best solution, in multi-objective optimization there is no single solution that is optimal with respect to all indices. Trajectory planning of redundant robots therefore remains an important area of research, and more efficient optimization algorithms are needed. This paper presents a new technique to solve the inverse kinematics of redundant manipulators, using a multi-objective genetic algorithm. This scheme combines the closed-loop pseudo-inverse method with a multi-objective genetic algorithm to control the joint positions. Simulations are developed for manipulators with three or four rotational joints, considering the optimization of two objectives in workspaces with and without obstacles. The results reveal that it is possible to choose among several solutions from the Pareto-optimal front according to the importance of each individual objective.
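The closed-loop pseudo-inverse update that this scheme builds on can be sketched in a few lines. The snippet below is a minimal illustration for a hypothetical three-link planar arm (link lengths, gains, and the target point are made-up values), and it omits the multi-objective genetic layer that the paper adds on top to select among the redundant configurations:

```python
import numpy as np

def forward_kinematics(q, link_lengths):
    """End-effector (x, y) of a planar serial arm with joint angles q."""
    q_cum = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(q_cum)),
                     np.sum(link_lengths * np.sin(q_cum))])

def planar_jacobian(q, link_lengths):
    """Jacobian of the end-effector position with respect to the joint angles."""
    q_cum = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(q_cum[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(q_cum[i:]))
    return J

def clik_step(q, x_desired, link_lengths, gain=10.0, dt=0.01):
    """One closed-loop pseudo-inverse update: dq = J+ * K * (x_d - x)."""
    error = x_desired - forward_kinematics(q, link_lengths)
    J_pinv = np.linalg.pinv(planar_jacobian(q, link_lengths))
    return q + dt * J_pinv @ (gain * error)

# Example: redundant 3-link planar arm driven toward a 2-D target point
links = np.array([1.0, 1.0, 1.0])
q = np.array([0.3, 0.2, 0.1])
target = np.array([1.5, 1.2])
for _ in range(500):
    q = clik_step(q, target, links)
print(forward_kinematics(q, links))  # should approach `target`
```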
Abstract:
This work reports on the experimental and numerical study of the bending behaviour of two-dimensional adhesively-bonded scarf repairs of carbon-epoxy laminates, bonded with the ductile adhesive Araldite 2015®. Scarf angles ranging from 2° to 45° were tested. The experimental work was used to validate a numerical Finite Element analysis in ABAQUS® and a methodology developed by the authors to predict the strength of bonded assemblies. This methodology consists of replacing the adhesive layer with cohesive elements, including mixed-mode criteria to deal with the mixed-mode behaviour usually observed in structures. Trapezoidal laws in pure modes I and II were used to account for the ductility of the adhesive. The cohesive laws in pure modes I and II were determined with Double Cantilever Beam and End-Notched Flexure tests, respectively, using an inverse method. Since interlaminar and transverse intralaminar failures of the carbon-epoxy components also occurred in some regions of the experiments, cohesive laws to simulate these failure modes were also obtained experimentally with a similar procedure. A good correlation with the experiments was found for the elastic stiffness, maximum load, and failure mode of the repairs, showing that this methodology accurately simulates the mechanical behaviour of bonded assemblies.
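The trapezoidal traction-separation law used here for the ductile adhesive can be illustrated with a short sketch; the stiffness, strength, and separation values below are placeholders, not the mode I/II properties identified in the paper for Araldite 2015®:

```python
import numpy as np

def trapezoidal_traction(delta, k0, t_u, delta_2, delta_f):
    """Trapezoidal cohesive law: linear elastic rise, constant-stress plateau, linear softening.

    delta   : separation
    k0      : initial (penalty) stiffness
    t_u     : plateau traction (cohesive strength)
    delta_2 : separation at the end of the plateau
    delta_f : separation at final failure (zero traction)
    """
    delta = np.asarray(delta, dtype=float)
    delta_1 = t_u / k0                                   # end of the elastic branch
    t = np.where(delta <= delta_1, k0 * delta, t_u)      # rise, then plateau
    softening = t_u * (delta_f - delta) / (delta_f - delta_2)
    return np.where(delta > delta_2, np.clip(softening, 0.0, None), t)

# Illustrative mode I parameters (placeholders, not the identified adhesive properties)
k0, t_u, delta_2, delta_f = 1.0e4, 20.0, 0.03, 0.045     # N/mm^3, MPa, mm, mm
d = np.linspace(0.0, 0.05, 6)
print(trapezoidal_traction(d, k0, t_u, delta_2, delta_f))

# Fracture toughness = area under the law (trapezoid area)
G_c = 0.5 * t_u * (delta_f + delta_2 - t_u / k0)
print(f"Toughness of this illustrative law: {G_c:.3f} N/mm")
```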
Abstract:
An experimental and Finite Element study was performed on the bending behaviour of wood beams of the Pinus pinaster species repaired with adhesively-bonded carbon-epoxy patches after sustaining damage by cross-grain failure. This damage is characterized by crack growth at a small angle to the beam's longitudinal axis, due to misalignment between the wood fibres and the beam axis. Cross-grain failure can occur on a large scale in a wood member when trees that have grown spirally or with a pronounced taper are cut for lumber. Three patch lengths were tested. The simulations include the possibility of cohesive fracture of the adhesive layer, failure within the wood beam in two propagation planes, and patch interlaminar failure, by the use of cohesive zone modelling. The respective cohesive properties were estimated either by an inverse method or from the literature. The comparison with the tests allowed the validation of the proposed methodology, opening a good prospect for reducing the cost of extensive experimentation in the design stages of these repairs.
Abstract:
The purpose of this work was to design and carry out thermal-hydraulic experiments dealing with overcooling transients of a VVER-440-type nuclear reactor pressure vessel. A sudden overcooling accident could have a negative effect on the mechanical strength of the pressure vessel. If part of the pressure vessel is compromised, the intense pressure inside a pressurized water reactor could cause the wall to fracture. Information on the heat transfer along the outside of the pressure vessel wall is necessary for stress analysis. Basic knowledge of the overcooling accident and of the heat transfer modes on the outside of the pressure vessel is presented as background information. A test facility was designed and built to study and measure heat transfer during specific overcooling scenarios. Two test series were conducted, the first concentrating on the very beginning of the transient and the second on steady-state heat transfer. Heat transfer coefficients are calculated from the test data using an inverse method, which yields better results in fast transients than direct calculation from the measurement results. The results show that the heat transfer rate varies considerably during the transient, being very high at the beginning and dropping to steady state within a few minutes. The test results show that appropriate correlations can be used in future analyses.
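As a rough illustration of the inverse step, the sketch below fits a heat transfer coefficient to a synthetic wall-temperature transient using a lumped-capacitance wall model; the model, the material numbers, and the data are assumptions for illustration only, whereas the study itself uses a more detailed inverse heat conduction analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Lumped-capacitance wall model: (rho*c*thickness) * dT/dt = -h * (T - T_coolant)
def wall_temperature(t, h, T0=280.0, T_coolant=20.0, heat_capacity_per_area=6.0e5):
    """Analytic wall temperature for a constant heat transfer coefficient h [W/(m^2 K)]."""
    tau = heat_capacity_per_area / h          # time constant [s]
    return T_coolant + (T0 - T_coolant) * np.exp(-t / tau)

# Synthetic "measurements" standing in for the test data
t_meas = np.linspace(0, 600, 61)                                        # s
T_meas = wall_temperature(t_meas, h=1500.0) + np.random.normal(0, 0.5, t_meas.size)

# Inverse step: find the h that best reproduces the measured transient
h_fit, _ = curve_fit(wall_temperature, t_meas, T_meas, p0=[500.0])
print(f"Recovered heat transfer coefficient: {h_fit[0]:.0f} W/(m^2 K)")
```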
Abstract:
In this article, a methodology is used for the simultaneous determination of the effective diffusivity and the convective mass transfer coefficient in porous solids that can be treated as an infinite cylinder during drying. Two models are used for optimization and drying simulation: model 1 (constant volume and diffusivity, with an equilibrium boundary condition) and model 2 (constant volume and diffusivity, with a convective boundary condition). Optimization algorithms based on the inverse method were coupled to the analytical solutions, allowing these solutions to be fitted to experimental drying-kinetics data. The optimization methodology was applied to describe the drying kinetics of whole bananas, using experimental data available in the literature. The statistical indicators show that the solution of the diffusion equation with a convective boundary condition gives better results than the solution with an equilibrium boundary condition.
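A minimal sketch of this fitting procedure for model 1 (equilibrium boundary condition) is shown below; the cylinder radius, the synthetic data, and the optimizer choice are assumptions, and the convective boundary condition of model 2 would additionally require eigenvalues that depend on the Biot number:

```python
import numpy as np
from scipy.special import jn_zeros
from scipy.optimize import curve_fit

MU = jn_zeros(0, 50)   # first 50 roots of the Bessel function J0

def moisture_ratio(t, D, R=0.015):
    """Mean moisture ratio of an infinite cylinder with an equilibrium boundary condition.

    MR(t) = sum_n (4 / mu_n^2) * exp(-mu_n^2 * D * t / R^2)
    """
    t = np.asarray(t, dtype=float)[:, None]
    return np.sum(4.0 / MU**2 * np.exp(-MU**2 * D * t / R**2), axis=1)

# Synthetic "experimental" drying curve standing in for the literature data
t_exp = np.linspace(0, 40 * 3600, 30)                                   # s
mr_exp = moisture_ratio(t_exp, D=2.0e-10) + np.random.normal(0, 0.01, t_exp.size)

# Inverse step: adjust D so the analytical solution matches the data
(D_fit,), _ = curve_fit(moisture_ratio, t_exp, mr_exp, p0=[1.0e-10])
print(f"Effective diffusivity: {D_fit:.2e} m^2/s")
```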
Abstract:
This paper aims at describing the osmotic dehydration of radish cut into cylindrical pieces, using one- and two-dimensional analytical solutions of the diffusion equation with boundary conditions of the first and third kind. These solutions were coupled with an optimizer to determine the process parameters from experimental data. Three models were proposed to describe the osmotic dehydration of radish slices in brine at low temperature. The two-dimensional model with a boundary condition of the third kind described the mass transfer kinetics well and enabled prediction of the moisture and solid distributions at any given time.
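For the two-dimensional (finite-cylinder) geometry with a boundary condition of the first kind, the mean concentration can be built from the one-dimensional solutions by Newman's product rule, as sketched below; the piece dimensions and diffusivity are illustrative placeholders, not the parameters identified in the paper:

```python
import numpy as np
from scipy.special import jn_zeros

MU = jn_zeros(0, 40)                 # roots of J0, radial direction
M = np.arange(40)                    # axial (finite-slab) series index

def mean_ratio_finite_cylinder(t, D, R, L):
    """Mean dimensionless concentration of a finite cylinder (radius R, half-height L)
    with a boundary condition of the first kind, via Newman's product rule:
    the finite-cylinder mean is the product of the infinite-cylinder and slab means."""
    t = np.asarray(t, dtype=float)[:, None]
    radial = np.sum(4.0 / MU**2 * np.exp(-MU**2 * D * t / R**2), axis=1)
    axial = np.sum(8.0 / ((2 * M + 1)**2 * np.pi**2)
                   * np.exp(-(2 * M + 1)**2 * np.pi**2 * D * t / (4 * L**2)), axis=1)
    return radial * axial

# Illustrative numbers only (radish piece of about 10 mm diameter x 10 mm height)
t = np.linspace(0, 6 * 3600, 7)                  # s
print(mean_ratio_finite_cylinder(t, D=3.0e-10, R=0.005, L=0.005))
```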
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method, chlorofluorocarbon-11 and -12 (CFC-11 and CFC-12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model with 4/3° resolution. Systematic differences can be seen between the observed CFC concentrations and the prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface while accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (σ₀ < 27.82 kg m⁻³; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was nearly 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10⁶ m³ s⁻¹) for subtropical mode water and subpolar mode water, respectively.
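The core of such a Green's function inversion is a linear Gauss-Markov estimate, sketched below with synthetic matrices; in the study the columns of G would be CFC responses simulated by the ocean model for perturbations of the air-sea gas transfer, whereas here the sizes and data are placeholders:

```python
import numpy as np

# Green's-function inversion sketch: tracer observations d are modelled as a linear
# combination of model responses to gas-transfer perturbations, d ~ G @ m.
rng = np.random.default_rng(0)

n_obs, n_param = 200, 10                     # illustrative sizes, not the study's dimensions
G = rng.normal(size=(n_obs, n_param))        # columns = response to each perturbation
m_true = rng.normal(size=n_param)            # "true" gas-transfer adjustments
sigma = 0.1                                  # observational uncertainty
d = G @ m_true + rng.normal(scale=sigma, size=n_obs)

# Gauss-Markov / weighted least-squares estimate with a Gaussian prior m ~ N(0, tau^2 I)
tau = 1.0
A = G.T @ G / sigma**2 + np.eye(n_param) / tau**2
m_hat = np.linalg.solve(A, G.T @ d / sigma**2)
m_cov = np.linalg.inv(A)                     # posterior covariance of the estimate

misfit = np.sum(((d - G @ m_hat) / sigma)**2) / n_obs
print("recovered parameters:", np.round(m_hat, 2))
print("normalized misfit (near 1 if model and error estimates are consistent):",
      round(misfit, 2))
```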
Abstract:
We present a method for simulating both the avalanche and surge components of pyroclastic flows generated by lava collapsing from a growing Pelean dome. This is used to successfully model the pyroclastic flows generated on 12 May 1996 by the Soufriere Hills volcano, Montserrat. In simulating the avalanche component we use a simple three-fold parameterisation of flow acceleration, whose values are chosen using an inverse method. The surge component is simulated by a 1D hydraulic balance of clast sedimentation and air entrainment away from the avalanche source. We show how multiple simulations based on the uncertainty in the starting conditions and parameters, specifically location and size (mass flux), could be used to map hazard zones.
Abstract:
Cellulose is the major constituent of most plants of interest as renewable sources of energy and is the most extensively studied form of biomass or biomass constituent. Predicting the mass loss and product yields when cellulose is subjected to increased temperature represents a fundamental problem in the thermal release of biomass energy. Unfortunately, at this time, there is no internally consistent model of cellulose pyrolysis that can organize the varied experimental data now available or provide a guide for additional experiments. Here, we present a model of direct cellulose pyrolysis using a multistage decay scheme that we first presented in the IJQC in 1984. This decay scheme can, with the help of an inverse method of assigning reaction rates, provide a reasonable account of the direct fast-pyrolysis yield measurements. The model is suggestive of dissociation states of D-glucose (C6H10O5), the fundamental cellulose monomer. The model raises the question of whether quantum chemistry could now provide the dissociation energies for the principal breakup modes of glucose into C-1, C-2, C-3, C-4, and C-5 compounds. These calculations would help in achieving a more fundamental description of volatile generation from cellulose pyrolysis and could serve as a guide for treating hemicellulose and lignin, the other major biomass constituents. Such advances could lead to the development of a predictive science of biomass pyrolysis that would facilitate the design of liquefiers and gasifiers based upon renewable feedstocks.
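The idea of a multistage decay scheme whose rates are assigned by an inverse method can be sketched as follows; the three-stage network, the rate constants, and the synthetic yield data are illustrative assumptions, not the scheme presented in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def decay_scheme(t, y, k1, k2, k3):
    """Illustrative multistage scheme: cellulose -> active intermediate -> volatiles or char."""
    cellulose, active, volatiles, char = y
    return [-k1 * cellulose,
            k1 * cellulose - (k2 + k3) * active,
            k2 * active,
            k3 * active]

def volatile_yield(rates, times):
    """Volatile mass fraction predicted by the scheme at the requested times."""
    sol = solve_ivp(decay_scheme, (0.0, times[-1]), [1.0, 0.0, 0.0, 0.0],
                    args=tuple(rates), t_eval=times)
    return sol.y[2]

# Time-resolved volatile yields to be matched (synthetic, illustrative numbers)
t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                  # s
y_obs = volatile_yield([2.0, 1.5, 0.3], t_obs) + 0.005 * np.random.randn(t_obs.size)

# Inverse step: assign rate constants that reproduce the observed yield history
fit = least_squares(lambda logk: volatile_yield(np.exp(logk), t_obs) - y_obs,
                    x0=np.log([1.0, 1.0, 0.1]))
print("fitted rate constants:", np.round(np.exp(fit.x), 3))
```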
Abstract:
Subduction zones are the favoured places for generating tsunamigenic earthquakes, where friction between oceanic and continental plates causes strong seismicity. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great tsunami-generating earthquakes. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture, and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Inferring the source parameters of tsunamigenic earthquakes is therefore crucial for understanding the generation of the consequent tsunami and for mitigating the risk along the coasts. The typical way to gather information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are unable to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I first present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1), for which the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrains the slip distribution on the fault much better than separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, in both the near and the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. The largest patch of slip was concentrated on the deepest part of the fault, which is the likely reason for the small tsunami waves that followed the earthquake, showing how strongly the depth of the rupture controls tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS, and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source-zone rigidity is important since it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment can generate a significant tsunami. This point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems, particularly in the near field, for the improvement of current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
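The slip inversions described above are, at their core, linear least-squares problems with positivity constraints on the subfault slip. The sketch below uses synthetic Green's functions and made-up subfault dimensions and rigidity, so it only illustrates the machinery, not the thesis's actual datasets or fault parameterizations:

```python
import numpy as np
from scipy.optimize import nnls

# Linear slip inversion sketch: observations d (waveform samples, offsets) are modelled as a
# linear combination of unit-slip Green's functions, d ~ G @ s with slip s >= 0 per subfault.
rng = np.random.default_rng(1)

n_data, n_subfaults = 300, 20                                    # illustrative sizes only
G = rng.normal(size=(n_data, n_subfaults))                       # columns = response to 1 m of slip
s_true = np.clip(rng.normal(2.0, 1.5, n_subfaults), 0.0, None)   # synthetic slip model [m]
d = G @ s_true + rng.normal(scale=0.2, size=n_data)              # synthetic observations

# A joint inversion uses the same machinery, with rows of G and d coming from different
# datasets (tsunami waveforms, GPS, altimetry), each weighted by its uncertainty.
s_hat, residual_norm = nnls(G, d)

# Seismic moment follows from the slip, subfault area, and an assumed rigidity.
rigidity = 3.0e10                 # Pa, assumed here; the thesis estimates it self-consistently
subfault_area = 50e3 * 50e3       # m^2, illustrative subfault size
M0 = rigidity * subfault_area * s_hat.sum()
print("recovered slip (m):", np.round(s_hat, 2))
print(f"moment magnitude Mw ~ {2 / 3 * (np.log10(M0) - 9.1):.1f}")
```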
Abstract:
A climatological field is a mean gridded field that represents the monthly or seasonal trend of an ocean parameter. This tool makes it possible to understand the physical conditions and processes of the ocean water and their impact on the world climate. To construct a climatological field, it is necessary to perform a climatological analysis on a historical dataset. In this dissertation, we have constructed the temperature and salinity fields for the Mediterranean Sea using the SeaDataNet 2 dataset. The dataset contains about 140,000 CTD, bottle, XBT, and MBT profiles, covering the period from 1900 to 2013. The temperature and salinity climatological fields are produced with the DIVA software, which uses a Variational Inverse Method and a Finite Element numerical technique to interpolate data onto a regular grid. Our results are also compared with a previous version of the climatological fields, and the quality of our climatologies is assessed according to the criteria suggested by Murphy (1993). Finally, the seasonal cycle of temperature and salinity in the Mediterranean Sea is described.
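The idea behind a variational inverse method can be shown on a one-dimensional grid: the analysed field minimizes a data-misfit term plus a smoothness penalty. The sketch below is only this idea with assumed weights and synthetic data; DIVA itself solves the analogous two/three-dimensional problem with finite elements and physically motivated correlation-length and noise parameters:

```python
import numpy as np

# Variational analysis on a 1-D grid: find the gridded field f minimizing
#   J(f) = sum_j (f(x_j) - d_j)^2 + alpha * sum_i ((f_{i+1} - f_i) / dx)^2
n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Scattered "observations" of a smooth field, standing in for T/S profile data
rng = np.random.default_rng(2)
obs_idx = rng.choice(n, size=25, replace=False)
d = np.sin(2 * np.pi * x[obs_idx]) + rng.normal(scale=0.1, size=obs_idx.size)

# Observation operator H picks the grid nodes at the data locations
H = np.zeros((obs_idx.size, n))
H[np.arange(obs_idx.size), obs_idx] = 1.0

# First-difference operator for the smoothness penalty
Dmat = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dx

alpha = 1.0e-4                     # assumed smoothing weight (plays the role of a correlation length)
A = H.T @ H + alpha * Dmat.T @ Dmat
f = np.linalg.solve(A, H.T @ d)    # analysed field on the grid
print(f"analysed value at x = 0.25: {f[np.argmin(abs(x - 0.25))]:.2f}"
      f" (underlying truth {np.sin(2 * np.pi * 0.25):.2f})")
```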
Abstract:
A new deep ice core drilling program, TALDICE, has been successfully carried out by a European team at Talos Dome, in the Ross Sea sector of East Antarctica, down to 1620 m depth. Using stratigraphic markers and a new inverse method, we produce the first official chronology of the ice core, called TALDICE-1. We show that it notably improves on an a priori chronology resulting from a one-dimensional ice flow model. It is in agreement with a posteriori checks on the resulting accumulation rate and thinning function along the core. An absolute uncertainty of only 300 yr is obtained over the course of the last deglaciation. This uncertainty remains lower than 600 yr over Marine Isotope Stage 3, back to 50 kyr BP. The phasing of the TALDICE ice core climate record with respect to the central East Antarctic plateau and Greenland records can thus be determined with a precision that allows discussion of the mechanisms at work on sub-millennial time scales.
Abstract:
Prevention and treatment of osteoporosis rely on an understanding of the micromechanical behaviour of bone and its influence on fracture toughness and cell-mediated adaptation processes. Postyield properties may be assessed by nonlinear finite element simulations of nanoindentation using elastoplastic and damage models. This computational study aims at determining the influence of yield surface shape and damage on the depth-dependent response of bone to nanoindentation using spherical and conical tips. Yield surface shape and damage were shown to have a major impact on the indentation curves. Their influence on the indentation modulus, the hardness, their ratio, and the elastic-to-total work ratio is well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not statistically significant (p<0.0001). For spherical tips, damage was not a significant parameter (p<0.0001). The knowledge gained can be used to develop an inverse method for the identification of postelastic properties of bone from nanoindentation.
Abstract:
This work presents the development of a methodology for obtaining a universe of Green's functions, together with the corresponding algorithm, to estimate tsunami heights along the west coast of Mexico as a function of the seismic moment and the extent of the rupture area of interplate earthquakes located between the coast and the Middle America Trench. Taking as a case study the earthquake that occurred on 9 October 1995 off the Jalisco-Colima coast, the effects of the resulting tsunami on the hydrodynamics of the Port of Manzanillo, Mexico, were studied with a methodology comprising the following steps. The first step applied the tsunami inverse method to constrain the parameters of the seismic source by building a universe of Green's functions for the west coast of Mexico. The seismic moment, as well as the location and extent of the rupture area, is prescribed on fault-plane segments of 30 x 30 km. To each of these fault-plane segments corresponds a set of Green's functions located on the 100 m isobath at 172 locations along the coast, separated on average by 12 km.
The second step studied the hydrodynamics caused by the tsunami (current velocities and sea levels inside the port, and the run-up on Las Brisas beach). These were studied in a fixed-bed hydraulic model and in a numerical model, representing a synthetic tsunami at a depth of 34 m as the initial condition, which was propagated to the coast as a solitary-wave signal. Based on the resulting hydrodynamics of the port of Manzanillo, a risk analysis was carried out to define the operating conditions of the port in terms of the velocities inside it and, starting from the initial conditions of the 1995 earthquake and tsunami, the limiting operating conditions for ships inside and outside the port were defined.
Abstract:
A method is developed to estimate the parameters of a hydraulic network from observed transient hydraulic head data. The physical parameters of the network, such as friction factors, absolute roughnesses, diameters, and the identification and quantification of leaks, are the unknown quantities. The inverse transient problem is solved using an indirect approach that compares the available observed transient hydraulic head data with those computed by a mathematical method. The Inverse Transient Method (ITM) with a Genetic Algorithm (GA) employs the Method of Characteristics (MOC) to solve the equations of motion for transient flow in pipe networks. The steady-state conditions are unknown. To assess the reliability of the ITM-GA developed here, an example network is used for the various calibration problems proposed. The transient behaviour is imposed by two distinct manoeuvres of a control valve located at one of the network nodes. The performance of the proposed method is also analysed with respect to the length of the transient record and possible reading errors in the hydraulic heads. Numerical experiments show that the method is feasible and applicable to the solution of inverse problems in hydraulic networks, especially when few observed data are available and the initial steady-state conditions are unknown. In the various identification problems, the transient information obtained from the sharper valve manoeuvre produced more efficient estimates.
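A minimal sketch of the indirect calibration loop is given below. A toy damped-oscillation head model stands in for the MOC network solver, and SciPy's differential evolution stands in for the genetic algorithm; the parameter names, bounds, and data are illustrative assumptions only:

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulated_heads(params, t, H0=50.0, amplitude=20.0, period=2.0):
    """Toy stand-in for the MOC transient solver: head at a monitored node after a
    valve manoeuvre, with damping controlled by a friction factor and a leak coefficient."""
    friction, leak = params
    decay = np.exp(-(friction + leak) * t)
    return (H0 - leak * 5.0) + amplitude * decay * np.cos(2 * np.pi * t / period)

# "Observed" transient heads (synthetic data with reading errors)
t_obs = np.linspace(0, 10, 200)
h_obs = simulated_heads([0.30, 0.05], t_obs) + np.random.normal(0, 0.2, t_obs.size)

# Indirect inverse approach: minimize the misfit between observed and computed heads
def misfit(params):
    return np.sum((simulated_heads(params, t_obs) - h_obs) ** 2)

result = differential_evolution(misfit, bounds=[(0.0, 1.0), (0.0, 0.5)], seed=0)
print("estimated friction factor and leak coefficient:", np.round(result.x, 3))
```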