991 results for CHEBYSHEV-TYPE QUADRATURE RULES
Abstract:
2000 Mathematics Subject Classification: 26A33 (main), 44A40, 44A35, 33E30, 45J05, 45D05
Abstract:
Land-use change, particularly clearing of forests for agriculture, has contributed significantly to the observed rise in atmospheric carbon dioxide concentration. Concern about the impacts on climate has led to efforts to monitor and curtail the rapid increase in concentrations of carbon dioxide and other greenhouse gases in the atmosphere. Internationally, much of the current focus is on the Kyoto Protocol to the United Nations Framework Convention on Climate Change (UNFCCC). Although electing to not ratify the Protocol, Australia, as a party to the UNFCCC, reports on national greenhouse gas emissions, trends in emissions and abatement measures. In this paper we review the complex accounting rules for human activities affecting greenhouse gas fluxes in the terrestrial biosphere and explore implications and potential opportunities for managing carbon in the savanna ecosystems of northern Australia. Savannas in Australia are managed for grazing as well as for cultural and environmental values against a background of extreme climate variability and disturbance, notably fire. Methane from livestock and non-CO2 emissions from burning are important components of the total greenhouse gas emissions associated with management of savannas. International developments in carbon accounting for the terrestrial biosphere bring a requirement for better attribution of change in carbon stocks and more detailed and spatially explicit data on such characteristics of savanna ecosystems as fire regimes, production and type of fuel for burning, drivers of woody encroachment, rates of woody regrowth, stocking rates and grazing impacts. The benefits of improved biophysical information and of understanding the impacts on ecosystem function of natural factors and management options will extend beyond greenhouse accounting to better land management for multiple objectives.
Abstract:
Light gauge cold-formed steel sections have been developed as more economical building solutions than the alternative heavier hot-rolled sections in the commercial and residential markets. Cold-formed lipped channel beams (LCB), LiteSteel beams (LSB) and triangular hollow flange beams (THFB) are commonly used as flexural members such as floor joists and bearers, while rectangular hollow flange beams (RHFB) are used in applications ranging from small scale housing developments to large building structures. However, their shear capacities are currently determined using conservative design rules. For the shear design of cold-formed steel beams, their elastic shear buckling strength and potential post-buckling strength must be determined accurately. Hence experimental and numerical studies were conducted to investigate the shear behaviour and strength of LCBs, LSBs, THFBs and RHFBs. Improved shear design rules, including design equations based on the direct strength method (DSM), were developed to determine the ultimate shear capacities of these open and hollow flange steel beams. An improved equation for the higher elastic shear buckling coefficient of cold-formed steel beams was proposed based on finite element analysis results and included in the design equations. A new post-buckling coefficient was also introduced in the design equations to account for the available post-buckling strength of cold-formed steel beams. This paper presents the details and results of this study on cold-formed steel beams subject to shear, and proposes generalised and improved shear design rules that can be used for any type of cold-formed steel beam.
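For context, the sketch below works through a generic DSM shear check without tension field action, using the standard AISI-type expressions rather than the improved rules proposed here; the buckling coefficient k_v, the section dimensions and the steel grade are hypothetical placeholders.

```python
# A minimal sketch of a Direct Strength Method (DSM) shear check for a
# cold-formed steel web, using the standard expressions without tension field
# action. NOT the improved design rules proposed in the paper; k_v, the
# dimensions and the yield stress below are placeholders.
import math

E = 200_000.0   # Young's modulus, MPa
NU = 0.3        # Poisson's ratio

def dsm_shear_capacity(d1, tw, fy, kv=5.34):
    """Nominal shear capacity V_n (N) of a web of depth d1 and thickness tw (mm)."""
    aw = d1 * tw                                                       # web shear area, mm^2
    tau_cr = kv * math.pi**2 * E / (12 * (1 - NU**2) * (d1 / tw)**2)   # elastic shear buckling stress
    v_cr = tau_cr * aw                                                 # elastic shear buckling capacity
    v_y = 0.6 * fy * aw                                                # shear yield capacity
    lam = math.sqrt(v_y / v_cr)                                        # shear slenderness
    if lam <= 0.815:
        return v_y
    if lam <= 1.227:
        return 0.815 * math.sqrt(v_cr * v_y)                           # inelastic / post-buckling range
    return v_cr                                                        # elastic buckling governs

# Example: hypothetical 200 mm deep, 2 mm thick web of 450 MPa steel
print(dsm_shear_capacity(d1=200.0, tw=2.0, fy=450.0) / 1e3, "kN")
```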
Abstract:
The magnetic moment μ_B of a baryon B with quark content (aab) is written as μ_B = 4e_a(1 + δ_B) eħ/(2cM_B), where e_a is the charge of the quark of flavor type a. The experimental values of δ_B have a simple pattern and have a natural explanation within QCD. Using the ratio method, the QCD sum rules are analyzed and the values of δ_B are computed. We find good agreement with data (≈10%) for the nucleons and the Σ multiplet, while for the cascade the agreement is not as good. In our analysis we have incorporated additional terms in the operator-product expansion as compared to previous authors. We also clarify some points of disagreement between the previous authors. External-field-induced correlations describing the magnetic properties of the vacuum are estimated from the baryon magnetic-moment sum rules themselves as well as by independent spectral representations, and the results are contrasted.
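To make the parametrization concrete, the snippet below inverts the quoted formula for the proton (quark content uud, e_u = +2/3), whose measured moment is about 2.7928 nuclear magnetons. This back-of-the-envelope inversion is only an illustration of the notation, not part of the sum-rule analysis.

```python
# Inverting mu_B = 4*e_a*(1 + delta_B), with mu_B quoted in units of
# e*hbar/(2*c*M_B), for the proton. Illustration only; the paper extracts
# delta_B from QCD sum rules, not from this inversion.
def delta_from_moment(mu_exp, e_a):
    """delta_B implied by an experimental moment in units of e*hbar/(2*c*M_B)."""
    return mu_exp / (4.0 * e_a) - 1.0

print(delta_from_moment(2.7928, 2.0 / 3.0))   # ~0.047 for the proton
```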
Abstract:
By solving numerically the full Maxwell-Bloch equations without the slowly varying envelope approximation and the rotating-wave approximation, we investigate the effects of the Lorentz local field correction (LFC) on the propagation of few-cycle laser pulses in a dense Λ-type three-level atomic medium. We find that when the area of the input pulse is large, pulse splitting occurs and the number of sub-pulses with LFC is larger than without LFC; at the same distance, the time interval between the first and second sub-pulses without LFC is longer than with LFC, the pulse appears later without LFC than with LFC, and both effects become more pronounced with increasing propagation distance; the time evolution of the populations of levels |1⟩, |2⟩ and |3⟩ differs markedly between the two cases. When the area of the input pulse is small, the effects of LFC on the time evolution of the pulse and the populations are much weaker than for a large-area pulse. (c) 2008 Elsevier B.V. All rights reserved.
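For reference, a minimal sketch of the Lorentz local-field correction itself, assuming the standard relation E_loc = E + P/(3ε0); this is only the correction term, not the full Maxwell-Bloch solver used in the study, and the number density, dipole moment and coherence passed in are placeholders.

```python
# The Lorentz local-field correction in SI units: in a dense medium the dipoles
# are driven by E_loc = E + P/(3*eps0) rather than the macroscopic field E.
from scipy.constants import epsilon_0

def macroscopic_polarization(N, d, rho_12):
    """P = N*d*(rho_12 + rho_21) for a real dipole moment d and coherence rho_12."""
    return N * d * 2.0 * rho_12.real

def local_field(E, P):
    """Field actually driving each atom once the LFC is included."""
    return E + P / (3.0 * epsilon_0)
```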
Abstract:
The deep level transient spectroscopy (DLTS) technique was used to investigate deep electron states in n-type Al-doped ZnS1-xTex epilayers grown by molecular beam epitaxy (MBE). Deep level transient Fourier spectroscopy (DLTFS) spectra of the Al-doped ZnS1-xTex (x = 0, 0.017, 0.04 and 0.046, respectively) epilayers reveal that Al doping leads to the formation of two electron traps at 0.21 and 0.39 eV below the conduction band. The DLTFS results suggest that, in addition to the roles of Te as a component of the alloy and as isoelectronic centers, Te is also involved in the formation of an electron trap whose energy level relative to the conduction band decreases as the Te composition increases.
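For context, trap depths such as the 0.21 and 0.39 eV levels above are conventionally extracted from the standard thermal emission-rate expression e_n(T) = σ_n⟨v_th⟩N_c exp(-E_a/kT). The sketch below evaluates that generic expression; the capture cross-section and electron effective mass are assumed placeholders, not fitted parameters from this work.

```python
# Generic DLTS emission-rate expression, not the specific fitting procedure of
# the paper; sigma_n and the effective mass are placeholders.
import math
from scipy.constants import k as kB, h, m_e, e as q

def emission_rate(T, E_a_eV, sigma_n=1e-19, m_eff=0.3 * m_e):
    """Electron emission rate (1/s) from a trap E_a_eV below the conduction band at T (K)."""
    v_th = math.sqrt(3.0 * kB * T / m_eff)                       # mean thermal velocity, m/s
    N_c = 2.0 * (2.0 * math.pi * m_eff * kB * T / h**2) ** 1.5   # effective conduction-band DOS, 1/m^3
    return sigma_n * v_th * N_c * math.exp(-E_a_eV * q / (kB * T))

print(emission_rate(300.0, 0.39))   # emission rate of the deeper trap at room temperature
```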
Abstract:
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
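As a rough illustration of two of the ideas above, assumption-base semantics and methods that unwind into primitive inference steps, here is a hypothetical Python sketch; it is not NDL-omega, and the rule names and data types are invented for illustration.

```python
# Hypothetical illustration (not NDL-omega syntax or semantics): primitive
# inference rules consult a dynamic assumption base, and a "method" is a proof
# recipe that unwinds into primitive steps, each conclusion extending the base.
from dataclasses import dataclass

@dataclass(frozen=True)
class And:
    left: object
    right: object

class ProofError(Exception):
    pass

def claim(p, ab):
    """Primitive rule: a proposition may be claimed only if it is in the assumption base."""
    if p not in ab:
        raise ProofError(f"{p!r} is not in the assumption base")
    return p

def both(p, q, ab):
    """Primitive rule: conjunction introduction."""
    claim(p, ab)
    claim(q, ab)
    return And(p, q)

def left_and(conj, ab):
    """Primitive rule: left conjunction elimination."""
    claim(conj, ab)
    return conj.left

def right_and(conj, ab):
    """Primitive rule: right conjunction elimination."""
    claim(conj, ab)
    return conj.right

def commute_and(conj, ab):
    """A derived rule ("method"): from A & B conclude B & A, unwound into primitives."""
    a = left_and(conj, ab)
    ab = ab | {a}                 # each conclusion extends the assumption base
    b = right_and(conj, ab)
    ab = ab | {b}
    return both(b, a, ab)         # the sequence of primitive steps plays the role of a certificate

print(commute_and(And("A", "B"), {And("A", "B")}))   # And(left='B', right='A')
```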
Abstract:
This paper formally defines the operational semantics for TRAFFIC, a specification language for flow composition applications proposed in BUCS-TR-2005-014, and presents a type system based on desired safety assurances. We provide proofs on reduction (weak confluence, strong normalization and unique normal forms), on soundness and completeness of the type system with respect to reduction, and on equivalence classes of flow specifications. Finally, we provide a pseudo-code listing of a syntax-directed type checking algorithm implementing the rules of the type system, capable of inferring the type of a closed flow specification.
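The abstract does not give the TRAFFIC typing rules themselves, so the following is only a hypothetical sketch of what a syntax-directed type checker for a tiny flow language can look like; the constructors (Prim, Seq, Par) and the composition rules are invented for illustration.

```python
# Hypothetical sketch, not the actual TRAFFIC syntax or type system: a flow has
# an input type and an output type; sequential composition requires the output
# of the first flow to match the input of the second, and parallel composition
# pairs the types componentwise.
from dataclasses import dataclass

@dataclass(frozen=True)
class Prim:           # primitive flow with declared input/output types
    name: str
    in_ty: object
    out_ty: object

@dataclass(frozen=True)
class Seq:            # f ; g
    first: object
    second: object

@dataclass(frozen=True)
class Par:            # f || g
    left: object
    right: object

class FlowTypeError(Exception):
    pass

def infer(flow):
    """Return (input_type, output_type) of a closed flow specification."""
    if isinstance(flow, Prim):
        return flow.in_ty, flow.out_ty
    if isinstance(flow, Seq):
        i1, o1 = infer(flow.first)
        i2, o2 = infer(flow.second)
        if o1 != i2:
            raise FlowTypeError(f"cannot compose: {o1} != {i2}")
        return i1, o2
    if isinstance(flow, Par):
        i1, o1 = infer(flow.left)
        i2, o2 = infer(flow.right)
        return (i1, i2), (o1, o2)
    raise FlowTypeError(f"unknown flow form: {flow!r}")

# Example: a well-typed pipeline and its inferred type
print(infer(Seq(Prim("encode", "Text", "Bytes"), Prim("send", "Bytes", "Ack"))))
```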
Abstract:
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the sample of observed Calspec standards used to define the PS1 calibration system from 7 to 10. The PS1 and SDSS-II calibration systems are compared and discrepancies of up to ∼0.02 mag are recovered. We find that uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w of up to 3%. We estimate the masses of the host galaxies of PS1 supernovae and detect an insignificant difference in the distance residuals of the full sample, 0.037 ± 0.031 mag, between host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in an analysis of the SN measurements alone, we find w = -1.120 +0.360/-0.206 (stat) +0.269/-0.291 (sys). With additional constraints from baryon acoustic oscillation (BAO), cosmic microwave background (CMB; Planck) and H0 measurements, we find w = -1.166 +0.072/-0.069 and Ω_m = 0.280 +0.013/-0.012 (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = -1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe measurements of the CMB: w(BAO+H0+SN+WMAP) = -1.124 +0.083/-0.065.
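As a small illustration of "statistical and systematic errors added in quadrature", the quoted asymmetric stat and sys uncertainties on w from the SN-only fit can be combined as follows; this is just the arithmetic, not a re-derivation of the paper's combined constraints.

```python
# Combining the quoted stat and sys errors on w (SN-only fit) in quadrature.
import math

def add_in_quadrature(*errors):
    return math.sqrt(sum(e * e for e in errors))

upper = add_in_quadrature(0.360, 0.269)   # ~0.45
lower = add_in_quadrature(0.206, 0.291)   # ~0.36
print(f"w = -1.120 +{upper:.3f}/-{lower:.3f} (stat+sys, SN only)")
```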
Abstract:
This paper derives optimal monetary policy rules in setups where certainty equivalence does not hold because central bank preferences are not quadratic and/or the aggregate supply relation is nonlinear. Analytical results show that these features lead to sign and size asymmetries, and to nonlinearities, in the policy rule. Reduced-form estimates indicate that US monetary policy can be characterized by a nonlinear policy rule after 1983, but not before 1979. This finding is consistent with the view that the Fed's inflation preferences during the Volcker-Greenspan regime differed considerably from those during the Burns-Miller regime.
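A generic example, not the specification estimated in the paper, of what a policy rule with sign and size asymmetries can look like: the response to the inflation gap is stronger above target and grows with the size of the gap.

```python
# Generic illustration only: an interest-rate rule with sign and size
# asymmetries in the response to the inflation gap. All parameter values are
# arbitrary placeholders.
def policy_rate(pi, y_gap, pi_star=2.0, r_star=2.0,
                phi_pi=0.5, phi_y=0.5, asym=0.3, curv=0.2):
    gap = pi - pi_star
    sign_term = asym * gap if gap > 0 else 0.0   # sign asymmetry: extra response above target
    size_term = curv * gap * abs(gap)            # size asymmetry: response grows with the gap
    return r_star + pi + phi_pi * gap + sign_term + size_term + phi_y * y_gap

print(policy_rate(pi=4.0, y_gap=0.0))   # above-target inflation
print(policy_rate(pi=0.0, y_gap=0.0))   # below-target inflation
```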
Abstract:
Interactions between the sugar-phosphate backbones of nucleotides play an important role in stabilizing the tertiary structures of large RNA molecules. They are governed by specific rules that dictate their formation but that have so far remained largely unknown. One RNA structural element for which sugar-phosphate interactions are important is the motif in which two RNA double helices pack along the minor groove. This motif is found at various locations in the structure of the ribosome. It consists of two double helices interacting in such a way that the sugar-phosphate backbone of one nestles into the minor groove of the other, and vice versa. The contact surface between the two helices is formed mostly by the riboses and involves twelve nucleotides in total. The present thesis aims to analyze the internal structure of this motif and how its stability depends on whether or not the association of the helices is optimal, according to their nucleotide sequences. It is shown in this thesis that appropriate positioning of the riboses allows them to form inter-helix contacts, through a particular choice of the identity of the base pairs involved. For the different base pairs participating in this inter-helix contact, the optimal identity can be of the Watson-Crick type, GC/CG, or certain non-Watson-Crick base pairs. The appropriate choice of base pairs provides a stable inter-helix interaction. In some instances of the motif, the identity of certain base pairs does not correspond to the most stable structure, which could reflect the fact that these motifs need to be free to form and deform during the functioning of the ribosome.
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computation of the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10^-15 uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are attractive, additionally, in that they maintain small relative errors for small and large arguments, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
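The construction rests on the link between the Fresnel integrals and the error function of complex argument. The sketch below is not the authors' modified trapezium rule; it only verifies the underlying identity C(x) + iS(x) = ((1+i)/2) erf((√π/2)(1-i)x) numerically, evaluating erf of complex argument via the Faddeeva function and using SciPy's Fresnel routine as a reference.

```python
# Checking C(x) + i*S(x) = (1+i)/2 * erf(sqrt(pi)/2 * (1-i) * x), with erf of
# complex argument computed from the Faddeeva function wofz. Not the authors'
# quadrature construction.
import numpy as np
from scipy.special import wofz, fresnel

def erf_complex(z):
    """erf for complex argument: erf(z) = 1 - exp(-z^2) * w(i z)."""
    return 1.0 - np.exp(-z * z) * wofz(1j * z)

def fresnel_via_erf(x):
    """Return (S(x), C(x)) computed from the complex error function."""
    z = 0.5 * np.sqrt(np.pi) * (1.0 - 1.0j) * x
    w = 0.5 * (1.0 + 1.0j) * erf_complex(z)
    return w.imag, w.real

x = np.linspace(0.0, 5.0, 11)
s_ref, c_ref = fresnel(x)            # SciPy reference values
s, c = fresnel_via_erf(x)
print(np.max(np.abs(s - s_ref)), np.max(np.abs(c - c_ref)))   # differences at machine-precision level
```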
Abstract:
We investigate the critical behaviour of a probabilistic mixture of cellular automata (CA) rules 182 and 200 (in Wolfram's enumeration scheme) by mean-field analysis and Monte Carlo simulations. We find that as we switch off one CA and switch on the other by varying the single parameter of the model, the probabilistic CA (PCA) goes through an extinction-survival-type phase transition, and the numerical data indicate that it belongs to the directed percolation universality class of critical behaviour. The PCA displays a characteristic stationary density profile and a slow, diffusive dynamics close to the pure CA 200 point that we discuss briefly. Remarks on an interesting related stochastic lattice gas are addressed in the conclusions.
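A minimal Monte Carlo sketch of such a rule mixture is given below: rule 182 is applied with probability p and rule 200 otherwise at every site and time step. This is a simplified reading of the model for illustration, with arbitrary lattice size and p.

```python
# Probabilistic mixture of elementary CA rules 182 and 200 on a ring.
import numpy as np

def rule_table(number):
    """Lookup table of an elementary CA rule in Wolfram's enumeration."""
    return np.array([(number >> i) & 1 for i in range(8)], dtype=np.uint8)

def step(state, p, rng, t182=rule_table(182), t200=rule_table(200)):
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right              # neighbourhood index 0..7
    use_182 = rng.random(state.size) < p            # per-site choice of rule
    return np.where(use_182, t182[idx], t200[idx]).astype(np.uint8)

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=1000, dtype=np.uint8)
p = 0.5
for _ in range(2000):
    state = step(state, p, rng)
print("stationary density ~", state.mean())
```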
Abstract:
When policy rules are changed, the effect of nominal rigidities should be modelled through endogenous pricing rules. We endogenize a Taylor (1979)-type pricing rule to examine the output effects of monetary disinflations. We derive optimal fixed-price time-dependent rules in inflationary steady states and during disinflations. We also develop a methodology to aggregate individual pricing rules which vary through the disinflation. This allows us to reevaluate the output costs of monetary disinflation, including aspects such as the role of the initial level of inflation and the importance of the degree of credibility of the policy change.
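For reference, the benchmark aggregation step in a standard Taylor-type staggered-contract setting with fixed contract length J is simply the average of the outstanding (log) contract prices; the sketch below shows that textbook step only, not the endogenous time-dependent rules or the aggregation method developed here.

```python
# Textbook Taylor staggered-contract aggregation with fixed contract length J;
# the contract prices below are arbitrary placeholders.
def aggregate_price_level(contract_prices, J):
    """Log price level given the log contract prices set in the last J periods."""
    recent = contract_prices[-J:]
    return sum(recent) / J

# Example: contracts set over the last 4 quarters during a disinflation
print(aggregate_price_level([1.00, 0.95, 0.90, 0.85], J=4))   # 0.925
```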