916 results for Global problem
Abstract:
The Common Reflection Surface (CRS) stacking method produces simulated zero-offset (ZO) sections by summing the seismic events of multi-coverage data contained in the stacking surfaces. The method does not depend on a velocity model of the medium; it only requires a priori knowledge of the near-surface velocity. The simulation of ZO sections by this stacking method uses a second-order hyperbolic approximation of the paraxial-ray traveltime to define the stacking surface, or CRS stacking operator. For 2D media this operator depends on three kinematic attributes of two hypothetical waves (the NIP and N waves), observed at the emergence point of the normal-incidence central ray: the emergence angle of the central ray with zero source-receiver offset (β0), the radius of curvature of the normal-incidence-point wave (RNIP), and the radius of curvature of the normal wave (RN). The optimization problem in the CRS method therefore consists of determining, from the seismic data, the three optimal parameters (β0, RNIP, RN) associated with each sampling point of the ZO section to be simulated. The simultaneous determination of these parameters can be carried out by multidimensional global search (global optimization) processes, using some coherence criterion as the objective function. The optimization problem in the CRS method is decisive for good performance with respect to the quality of the results and, above all, the computational cost, compared with the methods traditionally used in the seismic industry. Several search strategies exist to determine these parameters, based on systematic searches or on optimization algorithms, estimating one parameter at a time, two parameters, or all three parameters simultaneously. Within the global optimization strategy, the three parameters can be estimated by two procedures: in the first, the three parameters are estimated simultaneously; in the second, two parameters (β0, RNIP) are first determined simultaneously and the third parameter (RN) is then determined using the values of the two already known. This work presents the application and comparison of four global optimization algorithms for finding the optimal CRS parameters: Simulated Annealing (SA), Very Fast Simulated Annealing (VFSA), Differential Evolution (DE), and Controlled Random Search 2 (CRS2). As the main results, the application of each optimization method is presented, together with a comparison of the methods in terms of effectiveness, efficiency, and reliability in determining the best CRS parameters. Finally, applying the global search strategies with the VFSA method, which showed the best performance, the CRS stack was computed on the Marmousi data: one CRS stack using two parameters (β0, RNIP) estimated by global search, and another using all three parameters (β0, RNIP, RN), also estimated by global search.
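For reference, the second-order hyperbolic traveltime approximation defining the 2D CRS stacking operator is commonly written in the following standard form from the CRS literature (this is the textbook expression, not a formula quoted from the thesis itself); here x0 is the emergence point of the central ray, xm the midpoint coordinate, h the half-offset, t0 the ZO traveltime and v0 the near-surface velocity:

```latex
t^{2}(x_m, h) =
  \left[\, t_0 + \frac{2\sin\beta_0}{v_0}\,(x_m - x_0) \right]^{2}
  + \frac{2\,t_0\cos^{2}\beta_0}{v_0}
    \left[ \frac{(x_m - x_0)^{2}}{R_N} + \frac{h^{2}}{R_{NIP}} \right]
```

Setting h = 0 gives the ZO response, governed by β0 and RN, while the half-offset term is governed by RNIP; the coherence (e.g., semblance) of the data along this surface is the objective function that the global optimizers maximize.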
Abstract:
This paper aims to contribute to the three-dimensional generalization of numerical prediction of crack propagation through the formulation of finite elements with embedded discontinuities. The analysis of crack propagation in two-dimensional problems yields lines of discontinuity that can be tracked in a relatively simple way through the sequential construction of straight-line segments oriented according to the direction of failure within each finite element in the solid. In three-dimensional analysis, the construction of the discontinuity path is more complex because it requires the creation of plane surfaces within each element that must be continuous across elements. In the method proposed by Chaves (2003), the crack is determined by solving a problem analogous to the heat conduction problem, established from local failure orientations based on the stress state of the mechanical problem. To minimize the computational effort, this paper proposes a new strategy whereby the analysis for tracking the discontinuity path is restricted to the domain formed by the elements near the crack surface that develops along the loading process. The proposed methodology is validated by performing three-dimensional analyses of basic experimental fracture problems and comparing the results with those reported in the literature.
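In compact form, the heat-conduction analogy behind such crack tracking can be sketched as follows (an illustrative statement of the global tracking idea; the symbols are ours, not the paper's): a scalar field θ is computed from a steady anisotropic conduction-like problem whose conductivity tensor is assembled from the local failure directions,

```latex
\nabla \cdot \left( \mathbf{K}\, \nabla\theta \right) = 0
  \quad \text{in } \Omega,
\qquad
\mathbf{K} = \mathbf{T}\otimes\mathbf{T} + \mathbf{S}\otimes\mathbf{S},
```

where T and S are unit vectors spanning the local failure plane, so that the gradient of θ remains aligned with the failure normal and the level surfaces θ = const. are crack surfaces continuous across element boundaries. The strategy proposed in the paper restricts the domain Ω of this auxiliary problem to elements near the developing crack surface.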
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The main feature of partition of unity methods such as the generalized or extended finite element method is their ability to utilize a priori knowledge about the solution of a problem in the form of enrichment functions. However, analytical derivation of enrichment functions with good approximation properties is mostly limited to two-dimensional linear problems. This paper presents a procedure to numerically generate proper enrichment functions for three-dimensional problems with confined plasticity where plastic evolution is gradual. This procedure involves the solution of boundary value problems around local regions exhibiting nonlinear behavior and the enrichment of the global solution space with the local solutions through the partition of unity method framework. This approach can produce accurate nonlinear solutions with a reduced computational cost compared to standard finite element methods, since computationally intensive nonlinear iterations can be performed on coarse global meshes after the creation of enrichment functions properly describing localized nonlinear behavior. Several three-dimensional nonlinear problems based on the rate-independent J2 plasticity theory with isotropic hardening are solved using the proposed procedure to demonstrate its robustness, accuracy and computational efficiency.
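Schematically, the enrichment follows the partition of unity pattern below (a sketch with illustrative symbols: φi are the standard finite element partition-of-unity functions, ui the coarse-scale degrees of freedom, and u_loc the numerically computed local solution used as an enrichment function with degrees of freedom aj):

```latex
u_h(\mathbf{x}) =
  \sum_{i \in I} \varphi_i(\mathbf{x})\, u_i
  + \sum_{j \in I_{\mathrm{enr}}} \varphi_j(\mathbf{x})\,
      u_{\mathrm{loc}}(\mathbf{x})\, a_j
```

Here u_loc is obtained by solving a boundary value problem on the local region exhibiting plastic evolution, with boundary conditions transferred from the global solution, after which the nonlinear iterations run on the coarse global mesh using the enriched space.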
Abstract:
Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. These days, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving, compared to classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a mutation operation that adds scaled differences between randomly selected elements of the population to another element. Due to its simple implementation, minimal mathematical processing and good optimization capability, DE has attracted attention. This paper proposes a new approach to solve electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. This approach is tested on the design of a loudspeaker model with 17 degrees of freedom, to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences presents better, or at least similar, results when compared to the standard DE algorithm and other evolutionary algorithms available in the literature.
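As an illustrative sketch of the combination described (not the paper's exact implementation; the objective function, control parameters and the use of a logistic map as the chaos generator are all assumptions for the example), a DE/rand/1/bin loop in which the crossover decisions are driven by a chaotic sequence could look like this in Python:

```python
import numpy as np

def logistic_map(x):
    """One step of the logistic map in its chaotic regime (r = 4)."""
    return 4.0 * x * (1.0 - x)

def de_chaotic(objective, bounds, pop_size=30, F=0.8, CR=0.9,
               generations=200, chaos_seed=0.7):
    """DE/rand/1/bin where crossover decisions use a chaotic sequence.

    objective : callable mapping a parameter vector to a scalar cost
    bounds    : array-like of shape (dim, 2) with [low, high] per dimension
    """
    rng = np.random.default_rng(42)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    low, high = bounds[:, 0], bounds[:, 1]

    pop = low + rng.random((pop_size, dim)) * (high - low)
    cost = np.array([objective(ind) for ind in pop])
    chaos = chaos_seed  # state of the chaotic generator

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: add a scaled difference of two random members
            # to a third, all distinct from the target vector i.
            r1, r2, r3 = rng.choice(
                [k for k in range(pop_size) if k != i],
                size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), low, high)

            # Binomial crossover driven by the chaotic sequence.
            trial = pop[i].copy()
            j_rand = rng.integers(dim)  # guarantees one mutant gene
            for j in range(dim):
                chaos = logistic_map(chaos)
                if chaos < CR or j == j_rand:
                    trial[j] = mutant[j]

            # Greedy selection: keep the trial only if it is no worse.
            trial_cost = objective(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost

    best = np.argmin(cost)
    return pop[best], cost[best]

# Usage on a toy quadratic, standing in for the 17-parameter loudspeaker model:
if __name__ == "__main__":
    best_x, best_f = de_chaotic(lambda v: float(np.sum(v**2)),
                                bounds=[[-5.0, 5.0]] * 17)
    print(best_f)
```

The only change relative to standard DE is the source of the crossover random numbers, which is what makes the chaotic variant cheap to add on top of an existing DE code.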
Abstract:
A transmission problem involving two Euler-Bernoulli equations modeling the vibrations of a composite beam is studied. Assuming that the beam is clamped at one end and rests on an elastic bearing at the other, the existence of a unique global solution and decay rates for the energy are obtained by adding just one damping device at the end containing the bearing mechanism.
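In generic form, such a transmission problem couples two beam equations across an interface point (an illustrative statement with placeholder coefficients α, β and interface L0, not the paper's exact system):

```latex
u_{tt} + \alpha\, u_{xxxx} = 0 \quad \text{in } (0, L_0) \times (0, \infty),
\qquad
v_{tt} + \beta\, v_{xxxx} = 0 \quad \text{in } (L_0, L) \times (0, \infty),
```

together with transmission conditions at x = L0 (continuity of displacement, slope, bending moment and shear), clamped conditions at x = 0, and the elastic-bearing condition with a single damping term at x = L. The point of the result is that this one boundary damping device suffices to obtain well-posedness and energy decay.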
Abstract:
Modern food production is a complex, globalized system in which what we eat and how it is produced are increasingly disconnected. This thesis examines some of the ways in which global trade has changed the mix of inputs to food and feed, and how this affects food security and our perceptions of sustainability. One useful indicator of the ecological impact of trade in food and feed products is the Appropriated Ecosystem Areas (ArEAs), which estimates the terrestrial and aquatic areas needed to produce all the inputs to particular products. The method is introduced in Paper I and used to calculate and track changes in imported subsidies to Swedish agriculture over the period 1962-1994. In 1994, Swedish consumers needed agricultural areas outside their national borders to satisfy more than a third of their food consumption needs. The method is then applied to Swedish meat production in Paper II to show that the term “Made in Sweden” is often a misnomer. In 1999, almost 80% of manufactured feed for Swedish pigs, cattle and chickens was dependent on imported inputs, mainly from Europe, Southeast Asia and South America. Paper III examines ecosystem subsidies to intensive aquaculture in two nations: shrimp production in Thailand and salmon production in Norway. In both countries, aquaculture was shown to rely increasingly on imported subsidies. The rapid expansion of aquaculture turned these countries from fishmeal net exporters to fishmeal net importers, increasingly using inputs from the Southeastern Pacific Ocean. As the examined agricultural and aquacultural production systems became globalized, levels of dependence on other nations’ ecosystems, the number of external supply sources, and the distance to these sources steadily increased. Dependence on other nations is not problematic, as long as we are able to acknowledge these links and sustainably manage resources both at home and abroad. However, ecosystem subsidies are seldom recognized or made explicit in national policy or economic accounts. Economic systems are generally not designed to receive feedbacks when the status of remote ecosystems changes, much less to respond in an ecologically sensitive manner. Papers IV and V discuss the problem of “masking” of the true environmental costs of production for trade. One of our conclusions is that, while the ArEAs approach is a useful tool for illuminating environmentally-based subsidies in the policy arena, it does not reflect all of the costs. Current agricultural and aquacultural production methods have generated substantial increases in production levels, but if policy continues to support the focus on yield and production increases alone, taking the work of ecosystems for granted, vulnerability can result. Thus, a challenge is to develop a set of complementary tools that can be used in economic accounting at national and international scales that address ecosystem support and performance. We conclude that future resilience in food production systems will require more explicit links between consumers and the work of supporting ecosystems, locally and in other regions of the world, and that food security planning will require active management of the capacity of all involved ecosystems to sustain food production.
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the deep Earth's interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. A global tomographic image of the Earth can be written as a linear combination of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application are the object of study of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
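As a minimal illustration of the first type of use (wavelets as a representation basis), the following Python sketch decomposes a synthetic 1D model profile with PyWavelets and rebuilds it from a thresholded, sparse set of coefficients; the wavelet family, decomposition level and threshold are arbitrary choices for the example:

```python
import numpy as np
import pywt

# Synthetic velocity profile: smooth background plus a sharp discontinuity,
# the kind of multi-scale feature wavelets resolve better than Fourier bases.
x = np.linspace(0.0, 1.0, 512)
model = np.sin(2.0 * np.pi * x) + (x > 0.6).astype(float)

# Discrete wavelet decomposition: the model becomes a linear combination
# of scaled and translated wavelets (one approximation + detail levels).
coeffs = pywt.wavedec(model, wavelet='db4', level=4)

# Sparse representation: zero out small coefficients, then invert.
coeffs_thresh = [pywt.threshold(c, 0.1, mode='hard') for c in coeffs]
reconstructed = pywt.waverec(coeffs_thresh, wavelet='db4')

print("relative error:",
      np.linalg.norm(reconstructed[:model.size] - model)
      / np.linalg.norm(model))
```

This compression property is what makes wavelets attractive as tomographic basis functions: a few coefficients capture both the smooth background and the sharp local structure.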
Abstract:
The paper deals with the problem of (the often supposedly impossible) conversion to “Hinduism”. I start with an outline of what I call the ‘no conversion possible’ paradigm, and briefly point to the lack of reflection on acceptance of converts in most theories of religious conversion. Then, two examples are presented: Firstly, I consider conversion to ISKCON and the discourse on the Hare Krishna movement’s Hinduness. Secondly, I give a brief outline of the global sanatana dharma movement as inaugurated by Satguru Siva Subramuniyaswami, a converted American Hindu based in Hawai’i. In the conclusion, I reflect on (civic) social capital and engagement in global networks as a means to gain acceptance as converts to Hinduism. I argue in line with Stepick, Rey and Mahler (2009) that the religious movements’ civic engagement (in these cases engagement in favour of the Indian diasporic communities and of Hindus in India) provides a means for the individual, non-Indian converts to acquire the social capital that is necessary for gaining acceptance as ‘Hindus’ in certain contexts.
Abstract:
Nicotine addiction is a major public health problem, resulting in primary glutamatergic dysfunction. We measured the glutamate receptor binding in the human brain and provided direct evidence for the abnormal glutamate system in smokers. Because antagonism of the metabotropic glutamate receptor 5 (mGluR5) reduced nicotine self-administration in rats and mice, mGluR5 is suggested to be involved in nicotine addiction. mGluR5 receptor binding specifically to an allosteric site was observed by using positron emission tomography with [11C]ABP688. We found a marked global reduction (20.6%; P < 0.0001) in the mGluR5 distribution volume ratio (DVR) in the gray matter of 14 smokers. The most prominent reductions were found in the bilateral medial orbitofrontal cortex. Compared with 14 nonsmokers, 14 ex-smokers had global reductions in the average gray matter mGluR5 DVR (11.5%; P < 0.005), and there was a significant difference in average gray matter mGluR5 DVR between smokers and ex-smokers (9.2%; P < 0.01). Clinical variables reflecting current nicotine consumption, dependence and abstinence were not correlated with mGluR5 DVR. This decrease in mGluR5 receptor binding may be an adaptation to chronic increases in glutamate induced by chronic nicotine administration, and the decreased down-regulation seen in the ex-smokers could be due to incomplete recovery of the receptors, especially because the ex-smokers were abstinent for only 25 wk on average. These results encourage the development and testing of drugs against addiction that directly target the glutamatergic system.
Abstract:
More than 1 billion people lack access to clean water and proper sanitation. As part of efforts to solve this problem, there is a growing shift from public to private water management led by The World Bank and the International Monetary Fund (IMF). This shift has inspired much related research. Researchers have assessed water privatization related perceptions of consumers, government officials, and multinational company agents. This thesis presents results of a study of nongovernmental organization (NGO) staff perceptions of water privatization. Although NGOs are important actors in sustainable water related development through water provision, we have little understanding of their perceptions of water privatization and how it impacts their activities. My goal was to fill this gap. I sampled international and national development NGOs with water, sanitation, and hygiene (WASH) foci. I conducted 28 interviews between January and June of 2011 with staff in key positions including water policy analysts, program officers, and project coordinators. Their perceptions of water privatization were mixed. I also found that local water privatization in most cases does not influence NGO decisions to conduct projects in a region. I found that development NGO staff base their beliefs about water privatization on a mix of personal experience and media coverage. My findings have important implications for the WASH sector as we work to solve the worsening global water access crisis.
Abstract:
Because it is very toxic and accumulates in organisms, particularly in fish, mercury is a very important pollutant and one of the most studied. Concern over the toxicity and human health risks of mercury has prompted efforts to regulate anthropogenic emissions. As the mercury pollution problem grows increasingly serious, two questions arise: how serious will it be in the future, and how will future climate change affect mercury concentrations in the atmosphere? We therefore investigate the impact of climate change on atmospheric mercury concentrations, focusing on the comparison between mercury data for the year 2000 and for the year 2050. The GEOS-Chem model shows that the concentrations of all mercury tracers (1 to 3), elemental mercury (Hg(0)), divalent mercury (Hg(II)) and primary particulate mercury (Hg(P)), differ between 2000 and 2050 in most regions of the world. From the model results, climate change from 2000 to 2050 would decrease the Hg(0) surface concentration in most of the world; the driving factors of these changes are natural emissions (ocean and vegetation) and the transformation reactions between Hg(0) and Hg(II). Climate change from 2000 to 2050 would increase the Hg(II) surface concentration in most mid-latitude continental parts of the world while decreasing it in most high-latitude parts; the driving factors are the change in deposition amount (mainly wet deposition) from 2000 to 2050 and the transformation reactions between Hg(0) and Hg(II). Climate change would increase the Hg(P) concentration in most mid-latitude areas of the world while decreasing it in most high-latitude regions; here the major driving factor is the change in deposition amount (mainly wet deposition) from 2000 to 2050.
Abstract:
Taking up the thesis of Dipesh Chakrabarty (2009) that human history (including cultural history) on the one hand and natural history on the other must be brought into conversation more than has been done in the past, this presentation focuses on the significance and impact of global climatic conditions and pests on the negotiations that Australian Prime Minister William Morris Hughes carried on with the British government between March and November 1916. Whereas Australia had been able to sell most of its produce in 1914 and 1915, the situation looked more serious in 1916, not least due to the growing shortage of shipping. It was therefore imperative for the Australian government to find a way to solve this problem, not least because it wanted to keep up its own war effort at the pace it had maintained so far. In this context, intentions to make or press ahead with a contribution to a war perceived to be more total than those of the past interacted with natural phenomena such as the declining harvests in many parts of the world in 1916, a consequence of climatic conditions as well as pests.