958 results for weak ferromagnetism
Abstract:
Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit taken by letting the lattice mesh tend to zero. At criticality, the scaling limit of an interface is one of the SLE (Schramm–Loewner evolution) curves, introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and second papers of this thesis study properties of SLE. They present two different methods for studying the whole SLE curve, which is, in fact, the most interesting object from the statistical-physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find martingales common to different processes, and thereby to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model has scaling limits and that these are well described by Loewner evolutions with random driving forces.
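For readers who want to see the object behind these statements, the chordal Loewner evolution and its SLE specialisation can be written as follows; the notation (g_t, W_t, κ) is the standard one from the SLE literature and is not quoted from the thesis itself.

```latex
% Chordal Loewner evolution in the upper half-plane: the curve is encoded by
% conformal maps g_t driven by a real-valued function W_t.
\[
  \partial_t g_t(z) \;=\; \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z .
\]
% SLE_kappa is the Loewner evolution with a random driving force, a standard
% Brownian motion scaled by sqrt(kappa); the real parameter kappa is what
% selects the universality class.
\[
  W_t \;=\; \sqrt{\kappa}\, B_t , \qquad \kappa \ge 0 .
\]
```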
Abstract:
Sitophilus oryzae (Linnaeus) is a major pest of stored grain across Southeast Asia and is of increasing concern in other regions due to the advent of strong resistance to phosphine, the fumigant used to protect stored grain from pest insects. We investigated the inheritance of genes controlling resistance to phosphine in a strongly resistant S. oryzae strain (NNSO7525) collected in Australia and found that the trait is autosomally inherited and incompletely recessive, with a degree of dominance of −0.66. The strongly resistant strain has an LC50 52 times greater than that of a susceptible reference strain (LS2) and 9 times greater than that of a weakly resistant strain (QSO335). Analysis of F2 and backcross progeny indicates that two or more genes are responsible for strong resistance, and that one of these genes, designated Sorph1, not only contributes to strong resistance but is also responsible for the weak resistance phenotype of strain QSO335. These results demonstrate that the genetic mechanism of phosphine resistance in S. oryzae is similar to that of other stored-product insect pests. A unique observation is that a subset of the progeny of an F1 backcross generation is more strongly resistant to phosphine than the parental strongly resistant strain, which may be caused by multiple alleles of one of the resistance genes.
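The abstract quotes a degree of dominance of −0.66 without reproducing the formula behind it; a commonly used measure in fumigant-resistance genetics is Stone's (1968) expression based on log-transformed LC50 values, sketched below (the assumption that this particular study used it is mine, not the abstract's).

```latex
% Stone's (1968) degree of dominance D, from log LC50 values of the
% resistant parent (RR), the susceptible parent (SS) and the F1 hybrids:
\[
  D \;=\; \frac{2\,\log \mathrm{LC}_{50}(\mathrm{F_1})
                - \log \mathrm{LC}_{50}(\mathrm{RR})
                - \log \mathrm{LC}_{50}(\mathrm{SS})}
               {\log \mathrm{LC}_{50}(\mathrm{RR})
                - \log \mathrm{LC}_{50}(\mathrm{SS})} .
\]
% D = +1 corresponds to completely dominant resistance and D = -1 to
% completely recessive resistance, so D = -0.66 is incompletely recessive.
```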
Abstract:
The research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AEC), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of a weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the volatility forecasts derived is examined.
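As background for the model class named above, a generic multiplicative error model for a non-negative series such as a daily price range can be written as follows; this is the textbook formulation rather than the particular GH-GARCH, MCARR or VMEM specifications developed in the thesis.

```latex
% Generic multiplicative error model (MEM) for a non-negative series x_t:
\[
  x_t = \mu_t\, \varepsilon_t , \qquad
  \varepsilon_t \mid \mathcal{F}_{t-1} \sim D^{+}(1, \sigma^2) ,
\]
% with a GARCH-type recursion for the conditional mean,
\[
  \mu_t = \omega + \alpha\, x_{t-1} + \beta\, \mu_{t-1} ,
\]
% where D^+ denotes a distribution on the non-negative reals with unit mean.
% Taking x_t to be a squared return, a duration or a daily range recovers
% GARCH-, ACD- and CARR-type dynamics, respectively, as special cases.
```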
Abstract:
In Atlanta, the trade ministers of a dozen countries across the Pacific Rim announced that they had successfully concluded an agreement on the Trans-Pacific Partnership. The debate over the TPP will now play out in legislatures across the Pacific Rim, where sentiment towards the deal is much more mixed. The ministers insisted: “After more than five years of intensive negotiations, we have come to an agreement that will support jobs, drive sustainable growth, foster inclusive development, and promote innovation across the Asia-Pacific region … The agreement achieves the goal we set forth of an ambitious, comprehensive, high standard and balanced agreement that will benefit our nation’s citizens … We expect this historic agreement to promote economic growth, support higher-paying jobs; enhance innovation, productivity and competitiveness; raise living standards; reduce poverty in our countries; and to promote transparency, good governance, and strong labor and environmental protections.” But there has been fierce criticism of the Trans-Pacific Partnership, because of both its secrecy and its substance. Nobel Laureate Professor Joseph Stiglitz has warned that the agreement is not about free trade, but about the protection of corporate monopolies. The intellectual property chapter provides for longer and stronger protection of intellectual property rights. The investment chapter provides foreign investors with the power to challenge governments under an investor-state dispute settlement (ISDS) regime. The environment chapter is weak and toothless, and seems to be little more than an exercise in greenwashing. The health annex — and many other parts of the agreement — strengthens the power of pharmaceutical companies and biotechnology developers. The text on state-owned enterprises raises concerns about public ownership of postal services, broadcasters and national broadband services.
Abstract:
Both short-range and long-range intermolecular interaction energies between two aromatic hydrocarbon molecules, both in their ground state and separated by interplanar distances in the range 3–4 Å, are estimated using standard perturbation theory. The results show that aromatic hydrocarbons can form weak sandwich dimers with a larger separation between them than is normally assumed for their excimers. The non-sandwich form of the dimer, in which the long in-plane axes of the monomers are parallel and their short in-plane axes inclined, represents an unstable orientation, because this form can pass to the perfect sandwich form without an energy barrier.
Abstract:
Inheritance of resistance to phosphine fumigant was investigated in three field-collected strains of rusty grain beetle, Cryptolestes ferrugineus: Susceptible (S-strain), Weakly Resistant (Weak-R) and Strongly Resistant (Strong-R). The strains were purified for susceptibility, weak resistance and strong resistance to phosphine, respectively, to ensure homozygosity of resistance genotype. Crosses were established between S-strain × Weak-R, S-strain × Strong-R and Weak-R × Strong-R, and the dose–mortality responses to phosphine of these strains and their F1, F2 and F1-backcross progeny were obtained. The fumigations were undertaken at 25 °C and 55% RH for 72 h. Weak-R and Strong-R showed resistance factors of 6.3× and 505× compared with S-strain at the LC50. Both weak and strong resistances were expressed as incompletely recessive, with degrees of dominance of −0.48 and −0.43 at the LC50, respectively. Responses of F2 and F1-backcross progeny indicated the existence of one major gene in Weak-R, and at least two major genes in Strong-R, one of which was allelic with the major factor in Weak-R. Phenotypic variance analyses also estimated that the number of independently segregating genes conferring weak resistance was one (nE = 0.89), whereas two genes controlled strong resistance (nE = 1.2). The second gene, unique to Strong-R, interacted synergistically with the first gene to confer a very high level of resistance (~80×). Neither of the two major resistance genes was sex linked. Despite the similarity of the genetics of resistance to that previously observed in other pest species, a significant proportion (~15 to 30%) of F1 individuals survived at phosphine concentrations higher than predicted. Thus it is likely that additional dominant heritable factors, present in some individuals in the population, also influenced the resistance phenotype. Our results will help in understanding the process of selection for phosphine resistance in the field, which will inform resistance management strategies. In addition, this information will provide a basis for the identification of the resistance genes.
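The effective gene numbers (nE) quoted above come from a phenotypic variance analysis; one widely used estimator in such crossing designs is the Castle–Wright (Lande) formula sketched below, though the abstract does not say which variant was applied.

```latex
% Castle--Wright-type estimate of the effective number of segregating factors:
\[
  n_E \;=\; \frac{\bigl(\mu_{P_1} - \mu_{P_2}\bigr)^{2}}{8\,\sigma_{S}^{2}} ,
\]
% where mu_P1 and mu_P2 are the mean phenotypes (e.g. log tolerances) of the
% two parental strains and sigma_S^2 is the segregation variance, obtained as
% the excess of the variance of segregating generations (F2, backcross) over
% that of non-segregating generations (parents and F1).
```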
Abstract:
Retrospective identification of fire severity can improve our understanding of fire behaviour and ecological responses. However, burnt area records for many ecosystems are non-existent or incomplete, and those that are documented rarely include fire severity data. Retrospective analysis using satellite remote sensing data captured over extended periods can provide better estimates of fire history. This study aimed to assess the relationship between the Landsat differenced normalised burn ratio (dNBR) and the field-measured geometrically structured composite burn index (GeoCBI) for retrospective analysis of fire severity over a 23-year period in sclerophyll woodland and heath ecosystems. Further, we assessed whether dNBR fire severity classification accuracy was reduced by vegetation regrowth as the time between ignition and image capture increased. This was achieved by assessing four Landsat images captured at increasing times since ignition of the most recent burnt area. We found significant linear GeoCBI–dNBR relationships (R² = 0.81 and 0.71) for data collected across ecosystems and for Eucalyptus racemosa ecosystems, respectively. Non-significant and weak linear relationships were observed for heath and Melaleuca quinquenervia ecosystems, suggesting that GeoCBI–dNBR was not appropriate for fire severity classification in those specific ecosystems. Therefore, retrospective fire severity was classified across ecosystems. Landsat images captured within ~30 days after fire events were minimally affected by post-burn vegetation regrowth.
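A minimal sketch of how dNBR is typically computed from Landsat reflectance is shown below, using numpy; the band choice (NIR and SWIR2, e.g. bands 4 and 7 on Landsat 5 TM) and the toy input values are generic remote-sensing practice and illustrative assumptions, not the exact workflow or data of this study.

```python
import numpy as np

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    """Normalised burn ratio from NIR and SWIR2 reflectance arrays."""
    # Small epsilon avoids division by zero over water/shadow pixels.
    return (nir - swir2) / (nir + swir2 + 1e-10)

def dnbr(nir_pre: np.ndarray, swir2_pre: np.ndarray,
         nir_post: np.ndarray, swir2_post: np.ndarray) -> np.ndarray:
    """Differenced NBR: pre-fire NBR minus post-fire NBR.

    Higher values indicate higher burn severity; values are often scaled
    by 1000 before severity class thresholds are applied.
    """
    return nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)

if __name__ == "__main__":
    # Toy 2x2 reflectance "images" (hypothetical values, not study data).
    nir_pre = np.array([[0.35, 0.40], [0.38, 0.36]])
    swir2_pre = np.array([[0.12, 0.10], [0.11, 0.13]])
    nir_post = np.array([[0.18, 0.22], [0.30, 0.35]])
    swir2_post = np.array([[0.25, 0.20], [0.14, 0.12]])
    print(dnbr(nir_pre, swir2_pre, nir_post, swir2_post))
```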
Abstract:
Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson's embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the “RMF-property”. It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces we study the Rademacher maximal function on Euclidean spaces. The requirements on the type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions. As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions, giving yet another proof of its independence from various parameters.
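Schematically, and with notation chosen here for illustration rather than copied from the thesis, the dyadic Rademacher maximal function replaces the supremum of operator norms of dyadic averages by their R-bound:

```latex
% Dyadic averages of an operator-valued function f over cubes Q containing x:
\[
  \langle f \rangle_Q \;=\; \frac{1}{|Q|} \int_Q f(y)\, \mathrm{d}y .
\]
% Rademacher maximal function: the R-bound of the family of averages,
\[
  M_R f(x) \;=\; \mathcal{R}\bigl( \{ \langle f \rangle_Q : Q \ni x,\ Q \ \text{dyadic} \} \bigr) ,
\]
% where \mathcal{R}(\mathcal{T}) denotes the R-bound of a family of operators.
% The RMF-property of the range space asks for the L^p-comparability
% \| M_R f \|_{L^p} \lesssim \| f \|_{L^p}.
```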
Abstract:
The data obtained in the earlier parts of this series for the donor- and acceptor-end parameters of N–H...O and O–H...O hydrogen bonds have been utilised to obtain a qualitative working criterion to classify the hydrogen bonds into three categories: "very good" (VG), "moderately good" (MG) and "weak" (W). The general distribution curves for all four parameters are found to be nearly Gaussian. Assuming that VG hydrogen bonds lie between 0 and ±1σ, MG hydrogen bonds between ±1σ and ±2σ, and W hydrogen bonds beyond ±2σ (where σ is the standard deviation), suitable cut-off limits for classifying the hydrogen bonds into the three categories have been derived. These limits are used to obtain VG and MG ranges for the four parameters, two at the donor end and two at the acceptor end. The qualitative strength of a hydrogen bond is decided by the cumulative application of the criteria to all four parameters. The criterion has been further applied to some practical examples in conformational studies, such as the α-helix, and can be used for obtaining suitable locations of hydrogen atoms to form good hydrogen bonds. An empirical approach to the energy of hydrogen bonds in the three categories has also been presented.
Abstract:
We propose a new type of high-order element that incorporates mesh-free Galerkin formulations into the framework of the finite element method. Traditional polynomial interpolation is replaced by mesh-free interpolations in the present high-order elements, and the strain smoothing technique is used for integration of the governing equations based on smoothing cells. The properties of the high-order elements, which are influenced by the basis function of the mesh-free interpolations and by the boundary nodes, are discussed through numerical examples. The basis function is found to have a significant influence on the computational accuracy and on the upper and lower bounds of the energy norm, while the strain smoothing technique retains the softening phenomenon. This new type of high-order element shows good performance when quadratic basis functions are used in the mesh-free interpolations, and the present elements prove advantageous in adaptive mesh and node refinement schemes. Furthermore, they are less sensitive to element quality because they use mesh-free interpolations and obey the Weakened Weak (W2) formulation introduced in [3, 5].
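For context, the strain smoothing technique referred to above is usually implemented by averaging the compatible strain over each smoothing cell, which the divergence theorem turns into a boundary integral; the notation below is generic smoothed-FEM notation and is not taken from the paper.

```latex
% Cell-based strain smoothing over a smoothing cell Omega_C of area A_C:
\[
  \tilde{\varepsilon}_{ij}
  \;=\; \frac{1}{A_C} \int_{\Omega_C} \varepsilon_{ij}(\mathbf{x})\, \mathrm{d}\Omega
  \;=\; \frac{1}{2 A_C} \oint_{\Gamma_C}
        \bigl( u_i\, n_j + u_j\, n_i \bigr)\, \mathrm{d}\Gamma ,
\]
% where u is the displacement field interpolated by the mesh-free shape
% functions and n is the outward unit normal of the cell boundary Gamma_C.
% Only boundary integration of the shape functions is required, so derivatives
% of the (possibly non-polynomial) mesh-free interpolants are not needed.
```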
Abstract:
In this paper, we describe our investigation of the cointegration and causal relationships between energy consumption and economic output in Australia over a period of five decades. The framework used in this paper is the single-sector aggregate production function; this is the first comprehensive approach used in an Australian study of this type to include energy, capital and labour as separate inputs of production. The empirical evidence points to a cointegration relationship between energy and output and implies that energy is an important variable in the cointegration space, as are the conventional inputs capital and labour. We also find some evidence of bidirectional causality between GDP and energy use. Although the evidence of causality from energy use to GDP was relatively weak when using the thermal aggregate of energy use, once energy consumption was adjusted for energy quality, we found strong evidence of Granger causality from energy use to GDP in Australia over the investigated period. The results are robust, irrespective of the assumptions of linear trends in the cointegration models, and hold across different econometric approaches.
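As a reminder of the kind of specification behind such cointegration and Granger-causality findings, a generic bivariate error-correction system for log output y_t and log energy use e_t might look as follows (a sketch under simplifying assumptions, not the multivariate system actually estimated in the paper, which also includes capital and labour):

```latex
% Generic bivariate vector error-correction model (VECM), one cointegrating relation:
\[
  \Delta y_t = \alpha_y \bigl( y_{t-1} - \beta\, e_{t-1} - c \bigr)
             + \sum_{i=1}^{p} \gamma_i\, \Delta y_{t-i}
             + \sum_{i=1}^{p} \delta_i\, \Delta e_{t-i} + u_t ,
\]
\[
  \Delta e_t = \alpha_e \bigl( y_{t-1} - \beta\, e_{t-1} - c \bigr)
             + \sum_{i=1}^{p} \phi_i\, \Delta y_{t-i}
             + \sum_{i=1}^{p} \psi_i\, \Delta e_{t-i} + v_t .
\]
% Energy "Granger-causes" GDP if the delta_i (and/or alpha_y) are jointly
% significant; bidirectional causality corresponds to significance in both equations.
```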
Abstract:
This paper examines the possibilities for interfuel substitution in Australia in view of the need to shift towards a cleaner mix of fuels and technologies to meet future energy demand and environmental goals. The translog cost function is estimated for the aggregate economy, the manufacturing sector and its subsectors, and the electricity generation subsector. The advantages of this work over previous literature relating to the Australian case are that it uses relatively recent data, focuses on energy-intensive subsectors and estimates the Morishima elasticities of substitution. The empirical evidence shown herein indicates weak-form substitutability between different energy types, and higher possibilities for substitution at lower levels of aggregation, compared with the aggregate economy. For the electricity generation subsector, which is at the centre of the CO2 emissions problem in Australia, significant but weak substitutability exists between coal and gas when the price of coal changes. A higher substitution possibility exists between coal and oil in this subsector. The evidence for the own- and cross-price elasticities, together with the results for fuel efficiencies, indicates that a large increase in relative prices could be justified to further stimulate the market for low-emission technologies.
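For reference, Morishima elasticities of substitution are conventionally obtained from the fitted translog cost-share coefficients via the price elasticities of input demand; the expressions below follow the standard treatment and are not quoted from this paper.

```latex
% Translog cost function: with cost shares s_i and estimated second-order
% coefficients beta_ij, the own- and cross-price elasticities of input demand are
\[
  \varepsilon_{ij} = \frac{\beta_{ij} + s_i s_j}{s_i} \quad (i \neq j), \qquad
  \varepsilon_{ii} = \frac{\beta_{ii} + s_i^2 - s_i}{s_i} .
\]
% Morishima elasticity of substitution between inputs i and j,
% for a change in the price of input j:
\[
  M_{ij} = \varepsilon_{ij} - \varepsilon_{jj} ,
\]
% so M_ij > 0 indicates that inputs i and j are Morishima substitutes.
```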
Abstract:
The association between temperature and the risk of cardiovascular mortality has been recognized, but the association drawn from previous meta-analyses was weak due to the lack of sufficient studies. This paper presents a review of updated reports in the literature on the risk of cardiovascular hospitalization in relation to different temperature exposures and examines the dose–response relationship between temperature and cardiovascular hospitalization by change in units of temperature, latitude, and lag days. The pooled effect sizes were calculated for cold, heat, heatwave, and diurnal variation using random-effects meta-analysis, and the dose–response relationship between temperature and cardiovascular admission was modelled using random-effects meta-regression. The Cochrane Q-test and the index of heterogeneity (I²) were used to evaluate heterogeneity, and Egger's test was used to evaluate publication bias. Sixty-four studies were included in the meta-analysis. The pooled results suggest that, for a change in temperature condition, the risk of cardiovascular hospitalization increased by 2.8% (RR, 1.028; 95% CI, 1.021–1.035) for cold exposure, 2.2% (RR, 1.022; 95% CI, 1.006–1.039) for heatwave exposure, and 0.7% (RR, 1.007; 95% CI, 1.002–1.012) for an increase in diurnal temperature. However, no association was observed for heat exposure. A significant dose–response relationship between temperature and cardiovascular admission was found for cold exposure and diurnal temperature. An increase of one lag day was associated with a marginal reduction in the risk of cardiovascular hospitalization for cold exposure and diurnal variation, and an increase in latitude was associated with a decrease in the risk of cardiovascular hospitalization for diurnal temperature only. There is a significant short-term effect of cold exposure, heatwave and diurnal variation on cardiovascular hospitalizations. Further research is needed to understand the temperature–cardiovascular relationship in different climate areas.
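For readers unfamiliar with the quantities reported above, the heterogeneity index and the random-effects pooling usually take the following standard (DerSimonian–Laird) form, which is assumed here rather than quoted from the paper:

```latex
% Cochran's Q and the I^2 heterogeneity index for k studies with effect
% estimates theta_i (e.g. log relative risks), variances v_i and weights w_i = 1/v_i:
\[
  Q = \sum_{i=1}^{k} w_i \bigl(\hat{\theta}_i - \hat{\theta}_{\mathrm{FE}}\bigr)^2 ,
  \qquad
  I^2 = \max\!\left( 0,\; \frac{Q - (k-1)}{Q} \right) \times 100\% .
\]
% DerSimonian--Laird random-effects pooled estimate, with between-study
% variance tau^2 added to each study's sampling variance:
\[
  \hat{\theta}_{\mathrm{RE}}
  = \frac{\sum_i \hat{\theta}_i / (v_i + \hat{\tau}^2)}
         {\sum_i 1 / (v_i + \hat{\tau}^2)} .
\]
% The pooled RR is exp(theta_RE) when the theta_i are log relative risks.
```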