896 results for: deduced optical model parameters


Relevance:

100.00%

Publisher:

Abstract:

Based on microscopic observations and measurements, the mechanical behavior of the surface-nanocrystallized Al-alloy material at the microscale is investigated experimentally and theoretically. In the experimental work, the compressive stress-strain curves and the hardness-depth curves are measured. In the theoretical simulation, based on the material's microstructure characteristics and the experimental features of the compression and indentation tests, microstructure cell models are developed and the strain gradient plasticity theory is adopted. The compressive stress-strain curves and the hardness-depth curves are predicted and simulated. By comparing the experimental results with the simulation results, the material and model parameters are determined.

Relevance:

100.00%

Publisher:

Abstract:

This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, the intent is to use it to assess the condition of the system and to predict the response to future excitations.

A new identification methodology is developed, based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion. The situation considered herein is that in which only the base input and the response of a small number of degrees of freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.

In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
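
As an illustration of the nonparametric step, the sketch below fits a polynomial model of the generalized restoring force by least squares. The abstract does not spell out the four terms, so the basis chosen here (linear stiffness, linear damping, cubic stiffness, and a velocity-coupled term) and all names are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

def fit_nonparametric_restoring_force(x, v, r):
    """Least-squares fit of a four-term polynomial model of the generalized
    restoring force r(x, v) for one identified "mode".

    x, v : arrays of modal displacement and velocity
    r    : restoring force inferred from the measured response
           (e.g. r = -(a + a_base) for unit modal mass)

    The particular four-term basis below is an illustrative assumption.
    """
    basis = np.column_stack([x, v, x**3, x**2 * v])
    coeffs, *_ = np.linalg.lstsq(basis, r, rcond=None)
    return coeffs  # [k1, c1, k3, c3]

# toy usage with synthetic Duffing-like data
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2.0 * np.pi * t)
v = 2.0 * np.pi * np.cos(2.0 * np.pi * t)
r_true = 4.0 * x + 0.3 * v + 1.5 * x**3
print(fit_nonparametric_restoring_force(x, v, r_true + 0.01 * np.random.randn(t.size)))
```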

The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.

The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.

A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
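
A minimal sketch of joint state-parameter estimation in a Kalman filter is given below: the object's mass is appended to the state and updated from a noisy vertical force reading alongside a position measurement. The state layout, sensor model, and noise values are illustrative assumptions, not the filters developed in the thesis.

```python
import numpy as np

# Joint state-parameter Kalman filter: estimate object height z and mass m
# together.  Measurements: a position reading of z and the vertical force
# f = m * g from a wrist force-torque sensor (illustrative sensor model).
g = 9.81
F = np.eye(2)                  # z and m assumed constant between updates
H = np.array([[1.0, 0.0],      # position measurement
              [0.0, g]])       # force measurement f = g * m
Q = np.diag([1e-6, 1e-8])      # process noise
R = np.diag([1e-4, 1e-2])      # measurement noise

x = np.array([0.0, 0.5])       # initial guess: z = 0 m, m = 0.5 kg
P = np.diag([1.0, 1.0])

rng = np.random.default_rng(0)
true_z, true_m = 0.10, 1.2
for _ in range(200):
    y = np.array([true_z + rng.normal(scale=1e-2),
                  true_m * g + rng.normal(scale=1e-1)])
    x = F @ x                              # predict (static model)
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (y - H @ x)                # update
    P = (np.eye(2) - K @ H) @ P

print("estimated height, mass:", x)
```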

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. The performance of these two frameworks is compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information-gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best localizes the object and estimates these parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
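
The sketch below illustrates the information-gain computation behind such an action-selection step: a discrete belief over a few pose hypotheses, a simple binary contact model per candidate touch, and the expected entropy reduction for each action. The sensor model and numbers are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(belief, p_contact_given_state):
    """belief: (n_states,) prior over discrete object-pose hypotheses.
    p_contact_given_state: (n_states,) probability that this touch action
    reports contact under each hypothesis (simple binary sensor model).
    Returns the expected reduction in entropy of the belief."""
    p_contact = np.dot(belief, p_contact_given_state)
    outcomes = [(p_contact, p_contact_given_state),
                (1.0 - p_contact, 1.0 - p_contact_given_state)]
    expected_posterior_entropy = 0.0
    for p_o, likelihood in outcomes:
        if p_o <= 0:
            continue
        posterior = belief * likelihood
        posterior /= posterior.sum()
        expected_posterior_entropy += p_o * entropy(posterior)
    return entropy(belief) - expected_posterior_entropy

belief = np.array([0.25, 0.25, 0.25, 0.25])        # four door-handle pose hypotheses
actions = {                                        # contact probability per hypothesis
    "probe_left":  np.array([0.9, 0.1, 0.1, 0.1]),
    "probe_right": np.array([0.1, 0.1, 0.9, 0.9]),
}
best = max(actions, key=lambda a: expected_information_gain(belief, actions[a]))
print("next best touch:", best)
```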

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.

Relevance:

100.00%

Publisher:

Abstract:

This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.

In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.

In chapter three, I examine Community-Driven Development (CDD). Community-Driven Development is considered a tool for empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player--the targeted community member--decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies which claim otherwise.

In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
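
A minimal sketch of the simulated-moments idea is shown below, with a toy AR(1) process standing in for the limit-order-book model and an identity weighting matrix; the moments, process, and optimizer settings are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(theta, n=2000, seed=0):
    """Toy data-generating process standing in for the structural model:
    an AR(1) with persistence rho and shock scale sigma."""
    rho, sigma = theta
    r = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + sigma * r.normal()
    return x

def moments(x):
    # variance and first-order autocovariance as the matched moments
    return np.array([np.var(x), np.mean(x[1:] * x[:-1])])

data = simulate((0.7, 0.5), seed=123)
m_data = moments(data)

def msm_objective(theta):
    if not (0.0 < theta[0] < 0.99 and theta[1] > 0.0):
        return 1e6
    # average simulated moments over several paths with fixed seeds
    m_sim = np.mean([moments(simulate(theta, seed=s)) for s in range(5)], axis=0)
    diff = m_sim - m_data
    return float(diff @ diff)          # identity weighting for simplicity

result = minimize(msm_objective, x0=np.array([0.5, 0.3]), method="Nelder-Mead",
                  options={"maxiter": 200})
print("estimated (rho, sigma):", result.x)
```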

Relevance:

100.00%

Publisher:

Abstract:

The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance and missing transverse energy of events, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb⁻¹) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity SUSY results to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC using 4.7 ± 0.1 fb⁻¹ of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
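
For orientation, a short sketch computing the razor variables from the two megajet momenta and the missing transverse momentum is given below, using the definitions commonly quoted for the CMS razor analyses; conventions may differ in detail from those used in the thesis.

```python
import numpy as np

def razor_variables(j1, j2, met):
    """Compute the razor variables (MR, R).

    j1, j2 : megajet momenta as (px, py, pz) in GeV
    met    : missing transverse momentum as (px, py) in GeV

    Uses the commonly quoted definitions
        MR  = sqrt((|p1| + |p2|)^2 - (p1z + p2z)^2)
        MTR = sqrt((MET*(pT1 + pT2) - MET_vec.(pT1_vec + pT2_vec)) / 2)
        R   = MTR / MR
    """
    j1, j2, met = map(np.asarray, (j1, j2, met))
    p1, p2 = np.linalg.norm(j1), np.linalg.norm(j2)
    mr = np.sqrt((p1 + p2) ** 2 - (j1[2] + j2[2]) ** 2)

    pt1, pt2 = np.linalg.norm(j1[:2]), np.linalg.norm(j2[:2])
    met_mag = np.linalg.norm(met)
    mtr = np.sqrt((met_mag * (pt1 + pt2) - met @ (j1[:2] + j2[:2])) / 2.0)
    return mr, mtr / mr

mr, r = razor_variables((200.0, 50.0, 300.0), (-180.0, -40.0, -250.0), (30.0, 10.0))
print(f"MR = {mr:.1f} GeV, R = {r:.3f}")
```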

With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H → ZZ* → 4-lepton channel. Adaptations of the razor kinematic variables are introduced for the H → WW* → 2-lepton/2-neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H → ZZ* → 4-lepton final state, particularly for measuring CP violation in these decays.

Relevance:

100.00%

Publisher:

Abstract:

A Bayesian probabilistic methodology for on-line structural health monitoring, which addresses the issue of parameter uncertainty inherent in the problem, is presented. The method uses modal parameters for a limited number of modes, identified from measurements taken at a restricted number of degrees of freedom of a structure, as the measured structural data. The application presented uses a linear structural model whose stiffness matrix is parameterized to develop a class of possible models. Within the Bayesian framework, a joint probability density function (PDF) for the model stiffness parameters given the measured modal data is determined. Using this PDF, the marginal PDF of the stiffness parameter for each substructure given the data can be calculated.

Monitoring the health of a structure using these marginal PDFs involves two steps. First, the marginal PDF for each model parameter given modal data from the undamaged structure is found. The structure is then periodically monitored and updated marginal PDFs are determined. A measure of the difference between the calibrated and current marginal PDFs is used as a means to characterize the health of the structure. A procedure for interpreting the measure for use by an expert system in on-line monitoring is also introduced.
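
The abstract does not specify the difference measure; as a purely illustrative stand-in, the sketch below compares the calibrated and current marginal PDFs of one stiffness parameter with a symmetrized relative entropy, assuming Gaussian marginals.

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL divergence D(N(mu0, var0) || N(mu1, var1))."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def damage_measure(calibrated, current):
    """Symmetrized KL divergence between the calibrated and current marginal
    PDFs of one substructure stiffness parameter, approximated as Gaussians.
    This is an illustrative choice, not the thesis's specific measure."""
    mu0, var0 = calibrated
    mu1, var1 = current
    return kl_gaussian(mu0, var0, mu1, var1) + kl_gaussian(mu1, var1, mu0, var0)

# undamaged calibration vs. a later monitoring update showing reduced stiffness
print(damage_measure(calibrated=(1.00, 0.02**2), current=(0.93, 0.03**2)))
```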

The probabilistic framework is developed in order to address the model parameter uncertainty issue inherent in the health monitoring problem. To illustrate this issue, consider a very simplified deterministic structural health monitoring method. In such an approach, the model parameters which minimize an error measure between the measured and model modal values would be used as the "best" model of the structure. Changes between the model parameters identified using modal data from the undamaged structure and subsequent modal data would be used to find the existence, location and degree of damage. Due to measurement noise, limited modal information, and model error, the "best" model parameters might vary from one modal dataset to the next without any damage present in the structure. Thus, difficulties would arise in separating normal variations in the identified model parameters based on limitations of the identification method and variations due to true change in the structure. The Bayesian framework described in this work provides a means to handle this parametric uncertainty.

The probabilistic health monitoring method is applied to simulated data and laboratory data. The results of these tests are presented.

Relevance:

100.00%

Publisher:

Abstract:

Dynamic rupture simulations are unique in their contributions to the study of earthquake physics. The current rapid development of dynamic rupture simulations poses several new questions: Do the simulations reflect the real world? Do the simulations have predictive power? Which one should we believe when the simulations disagree? This thesis illustrates how integration with observations can help address these questions and reduce the effects of non-uniqueness of both dynamic rupture simulations and kinematic inversion problems. Dynamic rupture simulations with observational constraints can effectively identify non-physical features inferred from observations. Moreover, the integrative technique can also provide more physical insights into the mechanisms of earthquakes. This thesis demonstrates two examples of such kinds of integration: dynamic rupture simulations of the Mw 9.0 2011 Tohoku-Oki earthquake and of earthquake ruptures in damaged fault zones:

(1) We develop simulations of the Tohoku-Oki earthquake based on a variety of observations and minimal assumptions about model parameters. The simulations provide realistic estimates of the stress drop and fracture energy of the region and explain the physical mechanisms of high-frequency radiation in the deep region. We also find that the overriding subduction wedge contributes significantly to the up-dip rupture propagation and the large final slip in the shallow region. Such findings are also applicable to other megathrust earthquakes.

(2) Damaged fault zones are usually found around natural faults, but their effects on earthquake ruptures have been largely unknown. We simulate earthquake ruptures in damaged fault zones with material properties constrained by seismic and geological observations. We show that reflected waves in fault zones are effective at generating pulse-like ruptures and that head waves tend to accelerate and decelerate rupture speeds. These mechanisms are robust in natural fault zones with large attenuation and off-fault plasticity. Moreover, earthquakes in damaged fault zones can propagate at super-Rayleigh speeds that are unstable in homogeneous media, and supershear transitions in fault zones do not require large fault stresses. Finally, we present observations in the Big Bear region, where the variability of rupture speeds of small earthquakes correlates with the laterally variable materials in a damaged fault zone.

Relevance:

100.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
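
As a concrete instance of the convex relaxation in (i), the sketch below solves the lasso with a plain proximal-gradient (ISTA) iteration; the step size, penalty, and toy data are illustrative choices, not the algorithms analyzed in the thesis.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iters=500):
    """Solve  min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
    by proximal gradient descent (ISTA) with a fixed step size."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# sparse recovery toy: 40 measurements of a 100-dim signal with 5 nonzeros
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = ista_lasso(A, y, lam=0.02)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.05)[0])
```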

Relevance:

100.00%

Publisher:

Abstract:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects that employ optical channels have negligible frequency-dependent loss and offer a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques encounter several design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique for addressing these challenges, but its small locking range has been a major obstacle to its wider adoption.

In the first part of this dissertation we describe a wideband injection locking scheme in an LC oscillator. Phase locked loop (PLL) and injection locking elements are combined symbiotically to achieve wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for new locking range is derived. A locking range of 13.4 GHz–17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high frequency jitter filtering while retaining the low frequency correlated jitter essential for forwarded clock receivers.
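
The derived locking-range expression is not reproduced in the abstract; for orientation, the classic Adler estimate of the one-sided locking range of an LC oscillator under weak injection is shown below.

```latex
% Classic Adler estimate (weak injection); the expression derived in the
% thesis for the combined PLL/injection-locking scheme may differ.
\Delta\omega_{L} \;\approx\; \frac{\omega_{0}}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}
```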

To improve the locking range of an injection-locked ring oscillator, a quadrature locked loop (QLL) is introduced. The inherent dynamics of the injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7–7.4 GHz) to 90% (4–11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel, without any repeaters, for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps to achieve a reliable data rate of 16–32 Gb/s, and adaptive body biasing aids in maintaining an ultra-low power consumption of 153 pJ/bit.

From the optical receiver we move on to a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, to enable low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modelling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimum for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.
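
For reference, the linear baseline mentioned above can be sketched as a simple two-tap FIR pre-emphasis filter; the tap values and NRZ mapping are illustrative, and the optimized non-linear VCSEL equalization itself is not reproduced here.

```python
import numpy as np

def fir_preemphasis(bits, taps=(1.0, -0.25)):
    """Conventional 2-tap FIR pre-emphasis (main cursor + post-cursor),
    the linear baseline the thesis compares against.

    bits : sequence of 0/1 transmit bits
    taps : FIR coefficients applied to the NRZ levels (+1 / -1); illustrative
    """
    levels = 2.0 * np.asarray(bits, dtype=float) - 1.0   # NRZ mapping
    return np.convolve(levels, taps)[: len(levels)]

print(fir_preemphasis([0, 1, 1, 0, 1, 0, 0, 1]))
```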

Relevance:

100.00%

Publisher:

Abstract:

The phase behavior of binary systems containing a light and a heavy hydrocarbon is very important both for the actual design of a process and for the development of theoretical models. To meet the growing demand for experimental phase-equilibrium information at high pressures, the objective of this study is to obtain a methodology that partially replaces, or makes the most of, the scarce experimental information available. To this end, we propose modeling the phase equilibrium of mixtures of a light hydrocarbon with a heavy one, without knowledge of the molecular structure of the heavy compound, inferring the model parameters by fitting bubble-point data taken from the literature. This methodology involves not only the description of the phase equilibrium of a system but also the estimation of the critical properties of the heavy compound, which are difficult to obtain experimentally because such compounds crack at high temperatures. In this context, this study presents a strategy that indirectly estimates the critical properties of heavy compounds. For this purpose, experimental bubble-point data of binary mixtures containing a light and a heavy hydrocarbon were correlated using two models: Peng-Robinson and TPT1M (modified first-order Wertheim thermodynamic perturbation theory). The parameters fitted with the Peng-Robinson model correspond directly to the critical properties of the heavy compound, while those fitted with the TPT1M model were used to obtain them. This strategy yields model-dependent parameters, but it allows the calculation of other thermodynamic properties, such as temperature extrapolation of the data studied. Furthermore, it is believed that correlating the fitted parameters with the available critical properties will aid in the characterization of heavy fractions of unknown composition.
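
As a pointer to how critical properties enter the fit, the sketch below evaluates the standard Peng-Robinson parameters a(T) and b from a compound's critical temperature, critical pressure, and acentric factor; the bubble-point regression loop itself is not shown, and the example values are illustrative.

```python
import numpy as np

R_GAS = 8.314462618  # J / (mol K)

def peng_robinson_ab(Tc, Pc, omega, T):
    """Standard Peng-Robinson parameters a(T) and b from critical temperature
    Tc [K], critical pressure Pc [Pa], and acentric factor omega.
    In the bubble-point fitting strategy, Tc and Pc of the heavy compound
    are the adjustable quantities."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * (R_GAS**2 * Tc**2 / Pc) * alpha
    b = 0.07780 * R_GAS * Tc / Pc
    return a, b

# example: methane-like light component at 300 K (illustrative values)
print(peng_robinson_ab(Tc=190.6, Pc=4.599e6, omega=0.011, T=300.0))
```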

Relevance:

100.00%

Publisher:

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models, those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 × 10⁻³ both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The ill-conditioning usual in least-squares techniques is avoided by minimizing both the travel time residuals and the model perturbations.
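
A minimal sketch of a damped least-squares update of the kind described is given below; the actual weighting and damping strategy used in TTINV are not given here, so the form shown is the generic one.

```python
import numpy as np

def damped_least_squares_update(G, residuals, damping):
    """One iteration of a damped (ridge-regularized) least-squares model update:

        delta_m = (G^T G + damping * I)^-1  G^T residuals

    G         : partial derivatives of travel time w.r.t. model parameters
    residuals : observed minus computed travel times
    damping   : trades off fitting the residuals against keeping the model
                perturbation small (stabilizes the ill-conditioned problem)
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ residuals)

# toy example: 6 travel-time residuals, 3 model parameters
rng = np.random.default_rng(2)
G = rng.normal(size=(6, 3))
res = rng.normal(size=6)
print(damped_least_squares_update(G, res, damping=0.1))
```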

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.

Relevance:

100.00%

Publisher:

Abstract:

Cross sections for the reaction 12C(α,γ)16O have been measured for a range of center-of-mass alpha particle energies extending from 1.72 MeV to 2.94 MeV. Two 8″ × 5″ NaI(Tl) crystals were used to detect gamma rays; a time-of-flight technique was employed to suppress cosmic-ray background and background due to neutrons arising mainly from the 13C(α,n)16O reaction. Angular distributions were measured at center-of-mass alpha energies of 2.18, 2.42, 2.56 and 2.83 MeV. Upper limits were placed on the amount of radiation cascading through the 6.92- or 7.12-MeV states in 16O. By means of theoretical fits to the measured electric dipole component of the total cross section, in which interference between the 1⁻ states in 16O at 7.12 MeV and at 9.60 MeV is taken into account, it is possible to extract the dimensionless reduced alpha width of the 7.12-MeV state in 16O. A three-level R-matrix parameterization of the data yields Θ²α,F = 0.14 (+0.10, −0.08). A "hybrid" R-matrix-optical-model parameterization yields Θ²α,F = 0.11 (+0.11, −0.07). This quantity is of crucial importance in determining the abundances of 12C and 16O at the end of helium burning in stars.

Relevance:

100.00%

Publisher:

Abstract:

PART I

The total cross-section for the reaction 21Ne(α, n)24Mg has been measured in the energy range 1.49 MeV ≤ E_cm ≤ 2.6 MeV. The cross-section factor, S(0), for this reaction has been determined, by means of an optical model calculation, to be in the range 1.52 × 10¹² mb-MeV to 2.67 × 10¹² mb-MeV, for interaction radii in the range 5.0 fm to 6.6 fm. With S(0) ≈ 2 × 10¹² mb-MeV, the reaction 21Ne(α, n)24Mg can produce a large enough neutron flux to be a significant astrophysical source of neutrons.
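
For context, the astrophysical S-factor quoted above is related to the cross section through the standard definition shown below (η is the Sommerfeld parameter):

```latex
% Standard definition of the astrophysical S-factor used to extrapolate the
% measured cross section to stellar energies.
\sigma(E) \;=\; \frac{S(E)}{E}\, e^{-2\pi\eta},
\qquad
\eta \;=\; \frac{Z_{1} Z_{2} e^{2}}{\hbar v}
```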

PART II

The reaction 12C(3He, p)14N has been studied over the energy range 12 MeV ≤ E_lab ≤ 18 MeV. Angular distributions of the proton groups leading to the lowest seven levels in 14N were obtained.

Distorted wave calculations, based on two-nucleon transfer theory, were performed, and were found to be reliable for obtaining the value of the orbital angular momentum transferred. The present work shows that such calculations do not yield unambiguous values for the spectroscopic factors.

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the magnetocaloric effect is studied theoretically starting from a model Hamiltonian that accounts for a magnetic lattice formed by several coupled magnetic sublattices. The Hamiltonian includes the exchange, Zeeman, and magnetoelastic interactions. First, the Hamiltonian is presented in its generalized form for R magnetic sublattices, and the influence of the model parameters on the Néel temperature and on the compensation temperature is analyzed for the system with two magnetic sublattices. We find that, depending on the exchange parameters, ferrimagnetic, antiferromagnetic, and ferromagnetic arrangements can be obtained. The magnetocaloric effect was systematically studied for several possible arrangements and was subsequently studied in real compounds of the R3Fe5O12 (RIG) type, a system formed by three magnetic sublattices. Returning to the system with two magnetic sublattices, the influence of the magnetoelastic interaction on the magnetocaloric effect was analyzed for the ferrimagnetic arrangements obtained previously. Applying this model to a cubic perovskite-type structure, we studied the magnetocaloric effect in the compounds EuZrO3 and EuTiO3. A methodology for obtaining the magnetization of a polycrystalline sample was presented, and we also studied the anisotropic magnetocaloric effect of antiferromagnetic nature.
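
For context, the isothermal magnetic entropy change that characterizes the magnetocaloric effect is conventionally obtained from the Maxwell relation shown below; the multi-sublattice model Hamiltonian itself is not reproduced here.

```latex
% Isothermal magnetic entropy change from the Maxwell relation, the standard
% quantity used to characterize the magnetocaloric effect.
\Delta S_{M}(T, \Delta H) \;=\; \int_{0}^{H_{\max}}
    \left( \frac{\partial M(T, H)}{\partial T} \right)_{H} \mathrm{d}H
```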