935 results for Dynamic compact thermal models
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, it is of the utmost importance to develop compact dynamic thermal models that can be used for electrothermal simulation. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the model order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack (UTCS) technology.
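The constrained multiexponential fitting described above can be sketched as follows (an illustrative reconstruction, not the authors' implementation; SciPy's bounded least squares stands in for the constrained NLSQ step, and the two-time-constant transient is synthetic):

```python
# Hedged sketch: fit a Foster-form step response
#   Z(t) = sum_i R_i * (1 - exp(-t / tau_i))
# with positivity constraints on R_i and tau_i (passivity of the network).
import numpy as np
from scipy.optimize import least_squares

def foster_response(t, params):
    """Evaluate an n-term multiexponential; params = [R_1..R_n, tau_1..tau_n]."""
    n = len(params) // 2
    R, tau = params[:n], params[n:]
    return np.sum(R[None, :] * (1.0 - np.exp(-t[:, None] / tau[None, :])), axis=1)

def fit_foster(t, z_meas, n_terms):
    # Initial guess: equal amplitudes, log-spaced interior time constants.
    R0 = np.full(n_terms, z_meas[-1] / n_terms)
    tau0 = np.geomspace(t[len(t) // 4], t[3 * len(t) // 4], n_terms)
    x0 = np.concatenate([R0, tau0])
    res = least_squares(lambda p: foster_response(t, p) - z_meas,
                        x0, bounds=(1e-12, np.inf))  # constrained NLSQ
    return res.x

# Synthetic "measured" transient with two time constants (10 ms and 1 s).
t = np.logspace(-4, 1, 200)
z = 2.0 * (1 - np.exp(-t / 1e-2)) + 5.0 * (1 - np.exp(-t / 1.0))
p = fit_foster(t, z, n_terms=2)
```

Selecting `n_terms` against a separate validation transient, as the abstract suggests, is what keeps the model order small without overfitting.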
Abstract:
Using the solutions of the gap equations of the magnetic-color-flavor-locked (MCFL) phase of paired quark matter in a magnetic field, and taking into consideration the separation between the longitudinal and transverse pressures due to the field-induced breaking of the spatial rotational symmetry, the equation of state of the MCFL phase is self-consistently determined. This result is then used to investigate the possibility of absolute stability, which turns out to require a field-dependent "bag constant" to hold. That is, only if the bag constant varies with the magnetic field does a window exist in the magnetic field vs. bag constant plane for absolute stability of strange matter. Implications for stellar models of magnetized (self-bound) strange stars and hybrid (MCFL core) stars are calculated and discussed.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
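The kernel-smoothing step can be illustrated with a minimal toy sketch (my own example, not the paper's estimator): a Nadaraya-Watson smoother applied to one long simulated AR(1) path recovers the conditional mean that would enter the moment conditions.

```python
# Toy illustration (not the paper's estimator): Nadaraya-Watson kernel
# smoothing of one long simulated path to approximate the conditional
# moment E[y_t | y_{t-1} = x] at a few evaluation points.
import numpy as np

def nw_conditional_mean(x_sim, y_sim, x_grid, bandwidth):
    """Kernel-smoothed estimate of E[y | x] on x_grid (Gaussian kernel)."""
    d = (x_grid[:, None] - x_sim[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w @ y_sim) / w.sum(axis=1)

# Long simulation of an AR(1): y_t = 0.5 * y_{t-1} + eps_t.
rng = np.random.default_rng(0)
n = 200_000
y = np.zeros(n)
for s in range(1, n):
    y[s] = 0.5 * y[s - 1] + rng.standard_normal()

grid = np.array([-1.0, 0.0, 1.0])
m_hat = nw_conditional_mean(y[:-1], y[1:], grid, bandwidth=0.2)
# True conditional mean is 0.5 * x, so m_hat approximates [-0.5, 0.0, 0.5].
```

Because the smoother conditions on simulated draws rather than requiring simulation conditional on the data, the same recipe works when the conditioning variables are latent.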
Abstract:
The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated and, consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time constant spectrum while preserving a convenient convolution form. This method is applied to the thermal response of a microstructure obtained by the finite element method, as well as to the measured thermal response of a transistor array integrated circuit (IC) in an SMD package.
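The convolution form mentioned above can be made concrete numerically (an illustrative sketch in the spirit of Székely's network identification by deconvolution, not this paper's exact algorithm): on the logarithmic time axis z = ln t, the derivative of the heating curve equals the time-constant spectrum convolved with the fixed kernel w(z) = exp(z - exp(z)).

```python
# Verify numerically that d a(z)/dz = R(z) (*) w(z) on logarithmic time
# for a two-time-constant Foster model (values chosen for illustration).
import numpy as np

dz = 0.01
z = np.arange(-12.0, 12.0, dz)   # z = ln(t)
t = np.exp(z)

R_i = np.array([3.0, 4.0])               # K/W
tau = np.exp(np.array([-2.0, 1.0]))      # s
a = np.sum(R_i[None, :] * (1.0 - np.exp(-t[:, None] / tau[None, :])), axis=1)

da = np.gradient(a, dz)                  # left-hand side: da/dz
w = np.exp(z - np.exp(z))                # fixed convolution kernel

# Discretized time-constant spectrum: two delta lines at z = ln(tau_i).
R_spec = np.zeros_like(z)
for Ri, zi in zip(R_i, np.log(tau)):
    R_spec[np.argmin(np.abs(z - zi))] += Ri / dz

rhs = np.convolve(R_spec, w, mode="same") * dz   # right-hand side: R (*) w
```

An iterative deconvolution then inverts this relation to estimate R(z) from a measured transient; the area under R(z) equals the steady-state thermal resistance (7 K/W here).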
Abstract:
Thermal energy storage (TES) can increase the thermal energy efficiency of a process by reusing waste heat from industrial processes, solar energy or other sources. There are different ways to store thermal energy: by sensible heat, by latent heat, by sorption processes or by chemical reaction. This thesis provides a state-of-the-art review of the experimental performance of TES systems based on solid-gas sorption processes and chemical reactions. The importance of these processes is that they provide a heat-loss-free storage system with a high energy density.
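A back-of-the-envelope comparison makes the energy-density point concrete (a hedged sketch with textbook-style property values assumed for illustration, not measurements from the thesis):

```python
# Sensible storage: Q = m * c_p * dT ; latent storage releases the heat of
# fusion h_fus at nearly constant temperature. Values below are assumed,
# illustrative figures.

def sensible_kj_per_kg(cp_kj_per_kg_k, delta_t_k):
    """Energy stored per kg by heating a material through delta_t_k."""
    return cp_kj_per_kg_k * delta_t_k

# Water heated by 50 K vs. a paraffin-like PCM melting (h_fus ~ 200 kJ/kg).
q_sensible = sensible_kj_per_kg(4.186, 50.0)  # ~209 kJ/kg over a 50 K swing
q_latent = 200.0                              # ~200 kJ/kg with no temperature swing
```

Sorption and thermochemical systems push this further: the stored heat sits in a reversible reaction rather than in temperature, so, as the abstract notes, standby heat losses are essentially eliminated.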
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested in numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
Abstract:
This thesis is composed of three articles on the subjects of macroeconomics and finance. Each article corresponds to a chapter and is written in paper format. In the first article, co-authored with Axel Simonsen, we model and estimate a small open economy for Canada in a two-country dynamic stochastic general equilibrium (DSGE) framework. We show that it is important to account for the correlation between domestic and foreign shocks and for incomplete pass-through. In the second chapter, co-authored with Hedibert Freitas Lopes, we estimate a regime-switching macro-finance model of the term structure of interest rates to study the joint behavior of macro-variables and the yield curve in the US after World War II (WWII). We show that our model tracks the US NBER cycles well, that the addition of regime changes is important for explaining the expectations theory of the term structure, and that macro-variables have increasing importance in recessions in explaining the variability of the yield curve. We also present a novel sequential Monte Carlo algorithm to learn about the parameters and the latent states of the economy. In the third chapter, I present a Gaussian affine term structure model (ATSM) with latent jumps in order to address two questions: (1) what are the implications of incorporating jumps in an ATSM for Asian option pricing, in the particular case of the Brazilian DI Index (IDI) option, and (2) how do jumps and options affect the bond risk-premia dynamics? I show that the jump risk premium is negative in a scenario of decreasing interest rates (my sample period) and is important to explain the level of yields, and that Gaussian models without jumps and with constant-intensity jumps perform well in pricing Asian options.
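As a minimal, hedged illustration of the Gaussian ATSM class underlying the third chapter: the Vasicek model is its simplest member (the thesis adds latent jumps on top of this structure), and its bond prices are exponential-affine in the short rate. Parameter values below are arbitrary.

```python
# Hedged sketch: the Vasicek model, the simplest Gaussian affine term
# structure model. Short-rate dynamics dr = kappa*(theta - r)*dt + sigma*dW.
import math

def vasicek_price(r, tau, kappa, theta, sigma):
    """Zero-coupon bond price P = exp(A(tau) - B(tau) * r)."""
    B = (1.0 - math.exp(-kappa * tau)) / kappa
    A = ((theta - sigma ** 2 / (2.0 * kappa ** 2)) * (B - tau)
         - sigma ** 2 * B ** 2 / (4.0 * kappa))
    return math.exp(A - B * r)

def vasicek_yield(r, tau, kappa, theta, sigma):
    """Continuously compounded yield of maturity tau."""
    return -math.log(vasicek_price(r, tau, kappa, theta, sigma)) / tau

# With the short rate above its long-run level, the curve slopes downward.
y_short = vasicek_yield(0.10, 0.01, kappa=0.5, theta=0.06, sigma=0.01)
y_long = vasicek_yield(0.10, 30.0, kappa=0.5, theta=0.06, sigma=0.01)
```

Adding jumps changes A and B through the jump transform, which is exactly where the jump risk premium discussed in the abstract enters.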
Abstract:
Reinterpretation of old heat flow data, together with new data and new techniques for detecting subsurface temperature, has led to new heat flow density values in some regions of the globe. The problem of ice melting in Greenland and Antarctica has drawn the public's attention to the importance of knowledge of heat flow values and the thermal structure of the globe. In recent years, several models have been presented attempting to obtain the lithosphere and Moho thickness of the Iberian Peninsula. The work we present concerns the SW part of the Iberian Peninsula (south of the Ossa Morena Zone, the South Portuguese Zone and the Algarve). The results obtained show a decrease in the thickness of the crust and the lithosphere in this region. Density anomalies in the crust are also reported. I intend to connect the results of these models with the heat flow, thermal conductivity, heat production and geological data available for the region, in an attempt to explain the heat flow density results obtained.
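The central quantity here, heat flow density, follows from Fourier's law; a one-line sketch with typical (assumed) continental values:

```python
# Fourier's law for conductive heat flow density: q = k * dT/dz.
# Unit check: (W m^-1 K^-1) * (K km^-1) = mW m^-2, the unit heat flow
# maps are usually drawn in. Input values below are typical, assumed figures.

def heat_flow_density_mw_m2(k_w_m_k, gradient_k_per_km):
    """Heat flow density in mW/m^2 from conductivity and geothermal gradient."""
    return k_w_m_k * gradient_k_per_km

q = heat_flow_density_mw_m2(2.5, 25.0)  # a typical continental value, 62.5 mW/m^2
```

Measured conductivity and gradient thus constrain q directly, which is why new borehole temperature techniques can revise regional heat flow maps.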
Abstract:
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as a dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat of the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory reconstruction and a Runge–Kutta total variation diminishing scheme for time integration.
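The multiplicity of stationary solutions has a well-known zero-dimensional analogue (a hedged sketch of a Budyko-Sellers-type balance with ice-albedo feedback, far simpler than the coupled model above; all parameter values are assumed for illustration):

```python
# Zero-dimensional energy balance (all values assumed for illustration):
# ice-albedo feedback makes the stationary equation
#   Q * (1 - albedo(T)) = eps * sigma * T^4
# admit multiple solutions.
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 342.0         # mean incoming solar flux, W m^-2
EPS = 0.55        # effective emissivity (tuning value, assumed)

def albedo(T):
    """Smooth ramp: icy (0.7) below 260 K, ice-free (0.3) above 290 K."""
    return 0.7 - 0.4 * np.clip((T - 260.0) / 30.0, 0.0, 1.0)

def rhs(T):
    """Right-hand side of C dT/dt = absorbed - emitted."""
    return Q * (1.0 - albedo(T)) - EPS * SIGMA * T ** 4

# Stationary solutions: sign changes of rhs on a temperature grid.
T_grid = np.linspace(200.0, 350.0, 3001)
f = rhs(T_grid)
equilibria = T_grid[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
# Three crossings: cold and warm stable states plus an unstable middle one.
```

Tracking how these equilibria appear and disappear as Q varies traces out precisely the S-shaped bifurcation diagram the abstract expects for the richer model.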
Abstract:
Although DMTA is nowadays one of the most widely used techniques to characterize the thermo-mechanical behaviour of polymers, it is only effective for small-amplitude oscillatory tests and is limited to single-frequency analysis (the linear regime). In this thesis work, a Fourier-transform-based experimental system has proven to give hints about structural and chemical changes in specimens during large-amplitude oscillatory tests by exploiting multi-frequency spectral analysis, turning out to be a more sensitive tool than the classical linear approach. The test campaign focused on three test typologies: strain sweep tests, damage investigation and temperature sweep tests.
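The multi-frequency idea can be sketched with a toy signal (my own illustration, not the thesis setup): a nonlinear stress response to sinusoidal strain puts energy into odd higher harmonics, which a Fourier transform separates while a single-frequency linear analysis cannot.

```python
# Toy large-amplitude response: a linear term plus a cubic nonlinearity puts
# energy into the third harmonic (sin^3 x = (3 sin x - sin 3x) / 4), which a
# single-frequency linear analysis would miss entirely.
import numpy as np

fs, f0, n = 1000.0, 1.0, 8000            # sample rate (Hz), drive (Hz), samples
t = np.arange(n) / fs                    # exactly 8 drive periods
strain = np.sin(2.0 * np.pi * f0 * t)
stress = 1.0 * strain + 0.2 * strain ** 3

spec = np.abs(np.fft.rfft(stress)) / n
freqs = np.fft.rfftfreq(n, 1.0 / fs)
i1 = np.argmin(np.abs(freqs - f0))
i3 = np.argmin(np.abs(freqs - 3.0 * f0))
ratio_i3_i1 = spec[i3] / spec[i1]        # relative third-harmonic intensity
```

Growth of this harmonic ratio during a strain sweep is the kind of nonlinearity signature the thesis exploits as a damage indicator.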
Abstract:
Hybrix is a particularly innovative metal-polymer sandwich material, developed and manufactured by Lamera AB, Gothenburg, Sweden. This hybrid material is composed of two metal layers that are relatively thin compared to the core thickness. The most commonly used metals are aluminium and stainless steel, separated by a core of nylon fibres oriented perpendicularly to the metal plates. The core is completed by adhesive layers applied at the PA66-metal interface which, once cured, hold the nylon fibres in position. This special material is very light and formable. Moreover, depending on the specific metal used, Hybrix can achieve good corrosion resistance, and it can be cut and punched easily. The Hybrix architecture itself provides extremely good bending stiffness, damping properties, insulation capability, etc., which again change in magnitude depending on the metal alloy used, its thickness and the core thickness. For these reasons it nowadays shows potential for all applications that have the above-mentioned characteristics as requirements. Finally, Hybrix can be processed with the tools used in the regular sheet metal industry and can be handled like solid metal sheets. In this master thesis project, pre-formed parts of Hybrix were studied and characterized. Previous work on Hybrix focused on analysing its market potential and on different adhesives to be used in the core, with all tests carried out on flat, unformed specimens. However, in order to have a complete description of this material, the effect of the forming process must also be taken into account. Thus the main activities of the present master thesis are the following: dynamic mechanical-thermal analysis (DMTA) on unformed Hybrix samples of different thicknesses and on pre-strained Hybrix samples, analysis of pure epoxy adhesive samples, and finally evaluation of moisture effects on the Hybrix composite structure.
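The bending-stiffness benefit of the sandwich architecture follows from classical sandwich beam theory; a hedged sketch with assumed thicknesses and modulus (not Lamera's data):

```python
# Classical sandwich beam theory (thin faces, compliant core), per unit width:
#   faces:  D_sandwich ~ E_f * t_f * d^2 / 2, d = distance between face mid-planes
#   solid:  D_solid    =  E * t^3 / 12
# Thicknesses and modulus below are assumed, illustrative values.

def sandwich_rigidity(E_face, t_face, t_core):
    d = t_core + t_face
    return E_face * t_face * d ** 2 / 2.0

def solid_rigidity(E, t):
    return E * t ** 3 / 12.0

E_al = 70e9                  # aluminium faces, Pa
t_f, t_c = 0.2e-3, 1.0e-3    # 0.2 mm faces, 1.0 mm fibre core
# Same metal mass as a single 0.4 mm sheet, far higher bending stiffness:
ratio = sandwich_rigidity(E_al, t_f, t_c) / solid_rigidity(E_al, 2.0 * t_f)
```

With these numbers the layup is 27 times stiffer in bending than the same metal rolled into one sheet, which is the separation-of-faces effect behind the stiffness the abstract describes.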
Abstract:
The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of dynamic global vegetation models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were multiple linear regression (MLR) and data mining techniques such as classification and regression trees (CART) and K-means clustering. The results were compared to the uncalibrated model. It was found that simulated gross primary productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the uncalibrated approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as the Caatinga.
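A light-response fit of the kind used to derive photosynthetic parameters from such field measurements can be sketched as follows (an illustration with synthetic data and a simple rectangular hyperbola, not the INLAND calibration itself):

```python
# Illustrative sketch (synthetic data, not the thesis calibration): fit a
# rectangular-hyperbola light-response curve
#   A(I) = Amax * I / (I + K) - Rd
# to assimilation-vs-irradiance measurements.
import numpy as np
from scipy.optimize import curve_fit

def light_response(I, Amax, K, Rd):
    return Amax * I / (I + K) - Rd

# Synthetic "measurements": Amax = 20, K = 300, Rd = 1.5 (assumed values),
# with small Gaussian noise on top.
rng = np.random.default_rng(1)
I = np.linspace(0.0, 2000.0, 25)
A = light_response(I, 20.0, 300.0, 1.5) + rng.normal(0.0, 0.2, I.size)

popt, _ = curve_fit(light_response, I, A, p0=[15.0, 200.0, 1.0])
Amax_hat, K_hat, Rd_hat = popt
```

Parameters recovered this way per species (or averaged by regression/clustering across individuals, as in the abstract) are what replace the model's default Vcmax-type values.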
Abstract:
Dynamic global vegetation models (DGVMs) simulate surface processes such as the transfer of energy, water, CO2, and momentum between the terrestrial surface and the atmosphere, biogeochemical cycles, carbon assimilation by vegetation, phenology, and land use change in scenarios of varying atmospheric CO2 concentrations. DGVMs increase the complexity and the Earth system representation when they are coupled with atmospheric global circulation models (AGCMs) or climate models. However, plant physiological processes are still a major source of uncertainty in DGVMs. The maximum velocity of carboxylation (Vcmax), for example, has a direct impact over productivity in the models. This parameter is often underestimated or imprecisely defined for the various plant functional types (PFTs) and ecosystems. Vcmax is directly related to photosynthesis acclimation (loss of response to elevated CO2), a widely known phenomenon that usually occurs when plants are subjected to elevated atmospheric CO2 and might affect productivity estimation in DGVMs. Despite this, current models have improved substantially, compared to earlier models which had a rudimentary and very simple representation of vegetation-atmosphere interactions. In this paper, we describe this evolution through generations of models and the main events that contributed to their improvements until the current state-of-the-art class of models. Also, we describe some main challenges for further improvements to DGVMs.
Abstract:
The aim of this thesis is to test the ability of several correlative models, such as the Alpert correlations (proposed in 1972 and re-examined in 2011), the investigation of Heskestad and Delichatsios in 1978, and the correlations produced by Cooper in 1982, to define both the dynamic and thermal characteristics of a fire-induced ceiling-jet flow. The flow occurs when the fire plume impinges on the ceiling and develops in the radial direction from the fire axis. Both temperature and velocity predictions are decisive for sprinkler positioning, fire alarm positions, detector (heat, smoke) positions and activation times, and back-layering predictions. These correlative models are compared with the numerical simulation software CFAST in terms of the temperature and velocity results near the ceiling. The results are also compared with a computational fluid dynamics (CFD) analysis using ANSYS FLUENT.
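The first of those correlative models is easy to state in code; a sketch of the widely quoted Alpert (1972) ceiling-jet correlations (Q in kW, lengths in m, excess temperature in degrees C, velocity in m/s):

```python
# Alpert (1972) ceiling-jet correlations: Q is the fire heat release rate (kW),
# H the ceiling height above the fire (m), r the radial distance from the fire
# axis (m). Two regimes, split at r/H thresholds near the impingement zone.

def alpert_delta_t(Q_kw, r_m, H_m):
    """Maximum ceiling-jet excess temperature over ambient (degC)."""
    if r_m / H_m <= 0.18:                         # plume impingement region
        return 16.9 * Q_kw ** (2 / 3) / H_m ** (5 / 3)
    return 5.38 * (Q_kw / r_m) ** (2 / 3) / H_m   # radial jet region

def alpert_velocity(Q_kw, r_m, H_m):
    """Maximum ceiling-jet gas velocity (m/s)."""
    if r_m / H_m <= 0.15:
        return 0.96 * (Q_kw / H_m) ** (1 / 3)
    return 0.195 * Q_kw ** (1 / 3) * H_m ** (1 / 2) / r_m ** (5 / 6)

# Example: 1 MW fire, 5 m ceiling, detector 3 m from the fire axis.
dT = alpert_delta_t(1000.0, 3.0, 5.0)
u = alpert_velocity(1000.0, 3.0, 5.0)
```

Evaluating such closed forms along r is exactly what gets compared against the CFAST and FLUENT near-ceiling temperature and velocity fields in the thesis.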