950 results for intrinsic equilibrium constants


Relevance: 20.00%

Abstract:

We have measured the adiabatic second-order elastic constants of two Ni-Mn-Ga magnetic shape memory crystals with different martensitic transition temperatures, using ultrasonic methods. The temperature dependence of the elastic constants has been followed across the ferromagnetic transition and down to the martensitic transition temperature. Within experimental error, no noticeable change in any of the elastic constants has been observed at the Curie point. The temperature dependence of the shear elastic constant C' has been found to be very different for the two alloys. This difference in behavior is in agreement with recent theoretical predictions for systems undergoing multi-stage structural transitions.
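As background for the ultrasonic method, each elastic constant follows from a measured sound velocity and the crystal density via C = ρv². A minimal sketch, with illustrative round numbers rather than the values measured for these alloys:

```python
# Elastic constant from ultrasonic pulse-echo data: C = rho * v**2.
# Density and velocity are illustrative placeholders, not the measured
# values for the Ni-Mn-Ga crystals discussed in the abstract.
rho = 8000.0      # kg/m^3, assumed crystal density
v_shear = 2000.0  # m/s, assumed shear-wave velocity for the C' mode
C_prime = rho * v_shear ** 2   # elastic constant in Pa
C_prime_GPa = C_prime / 1e9    # same value in GPa
```

In practice, separate propagation directions and polarizations are needed to isolate C' and the other independent constants of a cubic crystal.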

Abstract:

In a recent paper A. S. Johal and D. J. Dunstan [Phys. Rev. B 73, 024106 (2006)] have applied multivariate linear regression analysis to the published data of the change in ultrasonic velocity with applied stress. The aim is to obtain the best estimates for the third-order elastic constants in cubic materials. From such an analysis they conclude that uniaxial stress data on metals turns out to be nearly useless by itself. The purpose of this comment is to point out that by a proper analysis of uniaxial stress data it is possible to obtain reliable values of third-order elastic constants in cubic metals and alloys. Cu-based shape memory alloys are used as an illustrative example.
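The kind of multivariate linear regression at issue can be sketched generically: the measured stress derivatives of the velocities form a linear system in the unknown third-order constants, solvable by least squares. The design matrix and constants below are synthetic placeholders, not real acoustoelastic data:

```python
import numpy as np

# Synthetic example of recovering third-order elastic constants (TOECs)
# by multivariate least squares. X plays the role of the coefficients
# relating velocity changes to the TOECs; all numbers are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))                  # 12 measurements, 3 unknowns
c_true = np.array([-1400.0, -800.0, -60.0])   # hypothetical TOECs (GPa scale)
y = X @ c_true + rng.normal(scale=0.1, size=12)  # noisy "measurements"

c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares estimate
```

With well-conditioned data the estimate recovers the constants; the comment's point is that poorly chosen uniaxial-stress configurations make the analogous real design matrix nearly singular.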

Abstract:

We report on measurements of the adiabatic second-order elastic constants of the off-stoichiometric Ni54Mn23Al23 single-crystalline Heusler alloy. The variation in the temperature dependence of the elastic constants has been investigated across the magnetic transition and over a broad temperature range. Anomalies in the temperature behavior of the elastic constants have been found in the vicinity of the magnetic phase transition. Measurements under applied magnetic field, both isothermal and variable temperature, show that the value of the elastic constants depends on magnetic order, thus giving evidence for magnetoelastic coupling in this alloy system.

Abstract:

The formation of coherently strained three-dimensional (3D) islands on top of the wetting layer in the Stranski-Krastanov mode of growth is considered in a model in 1 + 1 dimensions accounting for the anharmonicity and nonconvexity of the real interatomic forces. It is shown that coherent 3D islands can be expected to form in compressed rather than expanded overlayers beyond a critical lattice misfit. In expanded overlayers the classical Stranski-Krastanov growth is expected to occur because the misfit dislocations can become energetically favored at smaller island sizes. The thermodynamic reason for coherent 3D islanding is incomplete wetting owing to the weaker adhesion of the edge atoms. Monolayer height islands with a critical size appear as necessary precursors of the 3D islands. This explains the experimentally observed narrow size distribution of the 3D islands. The 2D-3D transformation takes place by consecutive rearrangements of mono- to bilayer, bi- to trilayer islands, etc., after the corresponding critical sizes have been exceeded. The rearrangements are initiated by nucleation events, each one needing to overcome a lower energetic barrier than the one before. The model is in good qualitative agreement with available experimental observations.

Abstract:

The present work discusses various properties and reliability aspects of higher-order equilibrium distributions in the continuous, discrete and multivariate cases, contributing to the study of equilibrium distributions. First, the existing literature on equilibrium distributions is studied and consolidated, which requires some basic concepts in reliability; these are discussed in Chapter 2. In Chapter 3, some identities connecting the failure rate functions and moments of residual life of univariate, non-negative continuous equilibrium distributions of higher order with those of the baseline distribution are derived. These identities are then used to characterize the generalized Pareto model, mixtures of exponentials and the gamma distribution. An approach using characteristic functions is also discussed with illustrations. Moreover, characterizations of ageing classes using stochastic orders are discussed. Part of the results of this chapter has been reported in Nair and Preeth (2009). Various properties of equilibrium distributions of non-negative discrete univariate random variables are discussed in Chapter 4. Some characterizations of the geometric, Waring and negative hypergeometric distributions are then presented, and the ageing properties of the original distribution and the nth-order equilibrium distributions are compared. Part of the results of this chapter has been reported in Nair, Sankaran and Preeth (2012). Chapter 5 is a continuation of Chapter 4. Here, several conditions connecting the baseline and its equilibrium distributions are derived in terms of stochastic orders; these conditions can be used to redefine certain ageing notions. Equilibrium distributions of two random variables are then compared in terms of various stochastic orders that have implications in reliability applications. In Chapter 6, we make two approaches to defining multivariate equilibrium distributions of order n. Various properties, including characterizations, of higher-order equilibrium distributions are then presented. Part of the results of this chapter has been reported in Nair and Preeth (2008). The thesis concludes in Chapter 7 with a discussion of further studies on equilibrium distributions.
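For orientation, the first-order equilibrium distribution of a non-negative variable with survival function S and finite mean μ has density f_E(x) = S(x)/μ; the exponential distribution is its own equilibrium distribution. A quick numerical check of this standard fact:

```python
import numpy as np

# Equilibrium density: f_E(x) = S(x) / mu, where S is the survival
# function and mu the mean of the baseline distribution.
lam = 2.0
x = np.linspace(0.0, 10.0, 2001)

f = lam * np.exp(-lam * x)   # exponential(lam) density
S = np.exp(-lam * x)         # survival function S(x) = P(X > x)
mu = 1.0 / lam               # mean of exponential(lam)

f_eq = S / mu                # first-order equilibrium density
# For the exponential, f_eq coincides with f pointwise: the exponential
# is the fixed point of the equilibrium transformation.
```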

Abstract:

The ionization of H2 in intense laser pulses is studied by numerical integration of the time-dependent Schrödinger equation for a single-active-electron model including the vibrational motion. The electron kinetic energy spectra in high-order above-threshold ionization are strongly dependent on the vibrational quantum number of the created H2+ ion. For certain vibrational states, the electron yield in the mid-plateau region is strongly enhanced. The effect is attributed to channel closings, which were previously observed in atoms by varying the laser intensity.

Abstract:

Auger-electron emission from foil-excited Ne ions (6 to 10 MeV beam energy) has been measured. The beam-foil time-of-flight technique has been applied to study electronic transitions of metastable states (delayed spectra) and to determine their lifetimes. To achieve a line identification for the complex structure observed in the prompt spectrum, the spectrum is separated into its isoelectronic parts by an Auger-electron-ion coincidence correlating the emitted electrons with the emitting projectiles of well-defined final charge state q_f. Well-resolved spectra were obtained, and the lines could be identified using multiconfiguration Dirac-Fock calculations in intermediate coupling. From the total KLL Auger-electron transition probabilities observed in the electron-ion coincidence experiment for Ne (10 MeV), the fraction of projectiles carrying one K-hole just behind a carbon target can be estimated. For foil-excited Ne projectiles, in contrast to single-collision results, the comparison of transition intensities for individual lines with calculated transition probabilities yields a statistical population of the Li- and Be-like configurations.

Abstract:

The rapid growth in high-data-rate communication systems has introduced new spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high-power amplifier (HPA) of the communication system's base transceiver station (BTS). To avoid spectral spreading due to high PAR, a stringent linearity requirement is imposed, which forces the HPA to operate at large power back-off at the expense of power efficiency. Consequently, high-power devices combining high linearity and efficiency are fundamental in HPAs. Recent development in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, offers higher power levels with a superior linearity-efficiency trade-off for microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) models of AlGaN/GaN HEMTs are essential to reflect the real response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including the trapping and self-heating dispersion that accounts for nonlinear drain-current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current has been enhanced by addressing several issues, such as trapping and self-heating characterization. The thermal profile of the HEMT structure has also been investigated, and the corresponding thermal resistance extracted through thermal simulation together with chuck-temperature-controlled pulsed I(V) and static DC measurements.
A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements. The obtained time constants are represented by equivalent sub-circuits and integrated into the nonlinear drain-current implementation to enable dynamic prediction under complex communication signals. Verification of this table-based large-size large-signal electrothermal model shows high accuracy in output power, gain, efficiency and nonlinearity prediction with respect to standard large-signal test signals.
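A higher-order equivalent thermal model of this kind is commonly realized as a Foster-type RC network, whose step response is a sum of exponential stages. A sketch with invented R_i and τ_i values, not the extracted HEMT parameters:

```python
import numpy as np

# Foster-type thermal network: each stage i contributes a temperature rise
# P * R_i * (1 - exp(-t / tau_i)) for a power step P at t = 0.
# All parameter values below are illustrative placeholders.
R = np.array([2.0, 5.0, 8.0])       # K/W, per-stage thermal resistances
tau = np.array([1e-6, 1e-4, 1e-2])  # s, per-stage thermal time constants
P = 2.0                              # W, dissipated power step
T_base = 300.0                       # K, base-plate (chuck) temperature

def channel_temp(t):
    """Instantaneous channel temperature for a power step applied at t = 0."""
    return T_base + P * np.sum(R * (1.0 - np.exp(-t / tau)))
```

The steady-state rise P·ΣR_i recovers the static thermal resistance, while the stages with small τ_i capture fast self-heating under modulated signals.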

Abstract:

The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it is of great relevance whether inflation is only driven by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, as described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called ‘divine coincidence’ (Blanchard and Galí 2007). In the latter case, the achievement of price stability is costly in terms of output and will be pursued over several periods. Similarly, it is important to distinguish this latter case, which describes ‘intrinsic’ inflation persistence, from that of ‘extrinsic’ inflation persistence, where the sluggishness of inflation is not a ‘structural’ feature of the economy but merely ‘inherited’ from the sluggishness of the other driving forces, inflation expectations and output. ‘Extrinsic’ inflation persistence is usually considered to be the less challenging case, as policy-makers are supposed to fight against the persistence in the driving forces, especially to reduce the stickiness of inflation expectations by a credible monetary policy, in order to reestablish the ‘divine coincidence’. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence: Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank the model with the two-period alternating price-setting (for most parameter values) leads to more persistent inflation than the model with stochastic price duration. 
This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too low a degree of autocorrelation and too-early turning points of inflation, and is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique by Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). While taking seriously the critique that Fuhrer and Moore's model collapses to a much simpler one without intrinsic inflation persistence if one takes their arguments literally, I extend the model by a term for inequality aversion. This model extension is not only in line with experimental evidence but results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to that presented by Fuhrer and Moore (1995). In chapter 5, I present a model that allows one to study the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and are able to donate a share of their resources to the other individual. In period 2, the two individuals might join in a common production after having bargained over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, as the human capital of the individuals, which is built by means of their resources, cannot be fully substituted for one another.
Therefore, it might be rational for a well-endowed individual in period 1 to act in a seemingly ‘fair’ manner and to donate own resources to its poorer counterpart. This decision also depends on the individuals’ impatience which is induced by the small but positive probability that production is not possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a ‘fair’ manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) ‘fair’ behavior is modelled as an endogenous outcome and as it is related to the aspect of time preference, the presented framework might help to further integrate behavioral economics and macroeconomics.
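The difference between a purely forward-looking Phillips curve and one with intrinsic persistence can be seen in a toy recursion: with a backward-looking weight γ_b > 0 (value illustrative, expectations anchored at zero, output gap closed), inflation returns to target only gradually after a shock, rather than immediately:

```python
# Toy hybrid Phillips curve: pi_t = gamma_b * pi_{t-1}, assuming anchored
# expectations (E[pi] = 0) and a closed output gap (x = 0).
# gamma_b = 0.5 is an illustrative value, not an estimate.
gamma_b = 0.5
pi = [1.0]                        # one-off inflation shock at t = 0
for _ in range(5):
    pi.append(gamma_b * pi[-1])   # intrinsic persistence: geometric decay
# With gamma_b = 0 (purely forward-looking case), inflation would already
# be back at 0 in period 1 -- the 'divine coincidence' outcome.
```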

Abstract:

We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography.
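The decomposition described can be sketched in one dimension under simplifying assumptions: a naive bilateral filter extracts the large scale of each input, and the output combines the ambient large scale with the flash detail (ratio) layer. The signals and parameters below are toy values; the authors' pipeline additionally works on 2-D images and corrects flash shadows:

```python
import numpy as np

def bilateral_1d(sig, sigma_s=2.0, sigma_r=0.2, radius=6):
    """Naive 1-D bilateral filter: Gaussian weights in position and intensity."""
    out = np.empty_like(sig)
    for i in range(len(sig)):
        lo, hi = max(0, i - radius), min(len(sig), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)
             * np.exp(-0.5 * ((sig[idx] - sig[i]) / sigma_r) ** 2))
        out[i] = np.sum(w * sig[idx]) / np.sum(w)
    return out

# Toy 1-D "images": the ambient signal is dark, the flash signal is bright
# and carries sharp detail (all values are made up for illustration).
x = np.linspace(0, 6, 64)
ambient = 0.2 + 0.05 * np.sin(x)
flash = 0.8 + 0.05 * np.sin(x) + 0.1 * (np.arange(64) % 8 == 0)

large_ambient = bilateral_1d(ambient)        # large scale of available light
detail_flash = flash / bilateral_1d(flash)   # detail (ratio) layer of the flash
result = large_ambient * detail_flash        # ambience + flash sharpness
```

The ratio layer is near 1 where the flash image is smooth, so the result inherits the ambient lighting there and adds structure only where the flash recorded detail.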

Abstract:

Hydrogeological research usually includes statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study. We find two factors on the compositional biplot by fitting two non-centred orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. For each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components, which can be plotted in a ternary diagram. These hidden components might be interpreted as end-members. We have analysed 14 molarities at 31 sampling stations along the Llobregat River and its tributaries, measured monthly over two years. We have obtained a biplot with 57% of the total variance explained, from which we have extracted two factors: factor G, reflecting the geological background enhanced by potash mining; and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, we have available analyses of the diffuse and widespread point sources identified in the area: springs, potash mining lixiviates, sewage, and fertilisers.
Each of these sources shows a clear link with one of the extreme samples, except the fertilisers, owing to the heterogeneity of their composition. This approach is a useful tool to distinguish end-members and to characterise them, an issue generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated but only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample must be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
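The ilr coordinates invoked here follow the standard isometric log-ratio construction; a minimal sketch for a generic 3-part composition, using one conventional balance basis rather than the axes fitted in the study:

```python
import numpy as np

def closure(x):
    """Rescale a composition so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def ilr_3part(x):
    """ilr coordinates of a 3-part composition (one standard basis choice)."""
    x1, x2, x3 = closure(x)
    z1 = (1 / np.sqrt(2)) * np.log(x1 / x2)
    z2 = (1 / np.sqrt(6)) * np.log((x1 * x2) / x3 ** 2)
    return np.array([z1, z2])

# Scale invariance: compositions carry only relative information, so
# multiplying all parts by a constant leaves the ilr coordinates unchanged.
a = ilr_3part([1.0, 2.0, 3.0])
b = ilr_3part([10.0, 20.0, 30.0])
```

Scale invariance is the point: only log-ratios of parts matter, which is why end-members can be characterised only through such relationships.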

Abstract:

A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the small, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete parameter m, the training sample size, which plays an important role in the resulting answer regardless of the sample size. In this paper we study the robustness of tests of independence in contingency tables with respect to intrinsic priors with different degrees of concentration around the null, and compare with other “robust” results by Good and Crook. Consistency of the intrinsic Bayesian tests is established. We also discuss conditioning issues and sampling schemes, and argue that conditioning should be on either one margin or the table total, but not on both margins. Examples using real and simulated data are given.

Abstract:

The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 − p. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample space for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as Single Nucleotide Polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically “close” to the parabola, whereas compositions that differ significantly from HWE are “far”. By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008).
This leads to appealing graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
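For comparison with the graphical approach, the classical numerical chi-square test for HWE can be sketched as follows; the genotype counts are made-up examples:

```python
import numpy as np

def hwe_chisq(n_AA, n_AB, n_BB):
    """Classical chi-square statistic (no continuity correction) for HWE."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)          # allele frequency of A
    q = 1 - p
    expected = np.array([p**2, 2 * p * q, q**2]) * n   # HW proportions
    observed = np.array([n_AA, n_AB, n_BB])
    return np.sum((observed - expected) ** 2 / expected)

stat_hwe = hwe_chisq(25, 50, 25)   # counts in exact HWE
stat_far = hwe_chisq(50, 0, 50)    # no heterozygotes: far from HWE
```

A sample in exact HWE gives a statistic of zero; compositions far from the parabola in the ternary plot give large values.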

Abstract:

Public contracting in Colombia is conflict-ridden and inefficient, and frequently leads to damage to State property. The Colombian legal system cannot assure efficient and transparent public contracting. The cause is an institutional environment characterized by high transaction costs. Colombian law worsens the process by recognizing the principle of economic equilibrium in public contracts. This principle increases contract incompleteness and renders impossible the use of economic incentives to control the opportunism of economic agents. The authors present the hypothesis that the economic equilibrium principle increases the conflictive nature of public contracting, and test it empirically. The first section of the paper summarizes the literature on transaction cost economics, as well as the legal literature on the historical origin and content of the economic equilibrium principle. The second section describes the methodology of the empirical study. The third section shows the empirical evidence of the effects that the economic equilibrium principle exerts on public contracting. The last section presents the conclusions.