971 results for Zero-coupon yield curve
Abstract:
The First Order Reversal Curve (FORC) method has been utilised to understand the magnetization reversal and the extent of irreversible magnetization of the soft-CoFe2O4/hard-SrFe12O19 nanocomposite in the non-exchange-spring and exchange-spring regimes. The single-peak switching behaviour in the FORC distribution of the exchange-spring composite confirms the coherent reversal of the soft and hard phases. The onset of the nucleation field and magnetization reversal by domain wall movement are also evident from the FORC measurements. (C) 2013 AIP Publishing LLC.
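As a numerical aside, the FORC distribution underlying such measurements is the mixed second derivative rho(Ha, Hb) = -(1/2) d2M/(dHa dHb) of the magnetization over reversal field Ha and applied field Hb. A minimal finite-difference sketch on a synthetic magnetization surface (illustrative only, not the nanocomposite data) could look like:

```python
import numpy as np

def forc_distribution(M, Ha, Hb):
    """FORC distribution rho(Ha, Hb) = -1/2 * d^2 M / (dHa dHb),
    evaluated by finite differences on a regular field grid."""
    dM_dHb = np.gradient(M, Hb, axis=1)   # derivative along the applied-field axis
    return -0.5 * np.gradient(dM_dHb, Ha, axis=0)

# Synthetic single-switching magnetization surface (hypothetical example).
Ha = np.linspace(-1.0, 0.0, 50)    # reversal fields
Hb = np.linspace(-1.0, 1.0, 100)   # applied fields
M = np.tanh(5.0 * (Hb[None, :] - Ha[:, None] - 0.5))
rho = forc_distribution(M, Ha, Hb)
```

A single, well-localized peak in `rho` is the signature referred to above as coherent reversal of the two phases.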
Abstract:
In this paper, the storage-repair-bandwidth (SRB) tradeoff curve of regenerating codes is reformulated to yield a tradeoff between two global parameters of practical relevance, namely information rate and repair rate. The new information-repair-rate (IRR) tradeoff provides a different and insightful perspective on regenerating codes. For example, it provides a new motivation for investigating constructions corresponding to the interior of the SRB tradeoff. Interestingly, each point on the SRB tradeoff corresponds to a curve in the IRR setup. We completely characterize functional repair under the IRR framework, while for exact repair, an achievable region is presented. In the second part of the paper, a rate-half regenerating code for the minimum-storage-regenerating point is constructed that draws upon the theory of invariant subspaces. While the parameters of this rate-half code are the same as those of the MISER code, the construction itself is quite different.
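For orientation, the two extreme points of the functional-repair SRB tradeoff have well-known closed forms from the cut-set bound (standard regenerating-code results, not specific to this paper): with file size B, any k of n nodes sufficing for reconstruction, and d helper nodes per repair, the minimum-storage (MSR) and minimum-bandwidth (MBR) points can be computed exactly:

```python
from fractions import Fraction

def msr_point(B, k, d):
    """Minimum-storage-regenerating (MSR) point of the functional-repair
    SRB tradeoff: (per-node storage alpha, total repair bandwidth d*beta)."""
    alpha = Fraction(B, k)
    beta = Fraction(B, k * (d - k + 1))
    return alpha, d * beta

def mbr_point(B, k, d):
    """Minimum-bandwidth-regenerating (MBR) point, where per-node storage
    equals the total repair bandwidth."""
    beta = Fraction(2 * B, k * (2 * d - k + 1))
    return d * beta, d * beta

# Example: B = 12 symbols, k = 3, d = 4 helpers.
alpha_msr, gamma_msr = msr_point(12, 3, 4)   # alpha = 4, repair bandwidth = 8
alpha_mbr, gamma_mbr = mbr_point(12, 3, 4)
```

Exact rational arithmetic (`Fraction`) avoids rounding when sweeping parameters along the tradeoff.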
Abstract:
Infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled Markov chains with countably many states are analyzed. Upper and lower values for these games are established. The existence of value and saddle-point equilibria in the class of Markov strategies is proved for the discounted-cost game. The existence of value and saddle-point equilibria in the class of stationary strategies is proved under the uniform ergodicity condition for the ergodic-cost game. The value of the ergodic-cost game happens to be the product of the inverse of the risk-sensitivity factor and the logarithm of the common Perron-Frobenius eigenvalue of the associated controlled nonlinear kernels. (C) 2013 Elsevier B.V. All rights reserved.
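The last statement is easy to illustrate numerically: once a pair of stationary strategies is fixed, the controlled kernel reduces to a positive matrix, and the ergodic value follows from its Perron-Frobenius eigenvalue. The 3-state kernel and risk-sensitivity factor below are hypothetical placeholders:

```python
import numpy as np

def perron_eigenvalue(K, iters=500):
    """Dominant (Perron-Frobenius) eigenvalue of a positive matrix,
    computed by power iteration."""
    v = np.ones(K.shape[0])
    lam = 1.0
    for _ in range(iters):
        w = K @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam

# Hypothetical 3-state positive kernel (standing in for the controlled
# kernel at a fixed pair of stationary strategies) and factor theta.
K = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.5]])
theta = 0.8
lam = perron_eigenvalue(K)
ergodic_value = np.log(lam) / theta   # value = (1/theta) * log(lambda)
```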
Abstract:
We consider a discrete-time partially observable zero-sum stochastic game with the average payoff criterion. We study the game using an equivalent completely observable game, show that the game has a value, and present a pair of optimal strategies for both players.
Abstract:
There have been attempts at obtaining robust guidance laws to ensure zero miss distance (ZMD) for interceptors with parametric uncertainties. All these laws require the plant to be of minimum phase type to enable the overall guidance loop transfer function to satisfy strict positive realness (SPR). The SPR property implies absolute stability of the closed loop system, and has been shown in the literature to lead to ZMD because it avoids saturation of lateral acceleration. In these works higher order interceptors are reduced to lower order equivalent models for which control laws are designed to ensure ZMD. However, it has also been shown that when the original system with right half plane (RHP) zeros is considered, the resulting miss distances, using such strategies, can be quite high. In this paper, an alternative approach using the circle criterion establishes the conditions for absolute stability of the guidance loop and relaxes the conservative nature of some earlier results arising from assumption of infinite engagement time. Further, a feedforward scheme in conjunction with a lead-lag compensator is used as one control strategy while a generalized sampled hold function is used as a second strategy, to shift the RHP transmission zeros, thereby achieving ZMD. It is observed that merely shifting the RHP zero(s) to the left half plane reduces miss distances significantly even when no additional controllers are used to ensure SPR conditions.
Abstract:
The present article describes a working, or combined, calibration curve for laser-induced breakdown spectroscopic analysis, which is the cumulative result of the calibration curves obtained from neutral and singly ionized atomic emission spectral lines. This working calibration curve reduces the effect of matrix changes between different zone soils and certified soil samples because it includes the concentrations of both species (neutral and singly ionized) of the element of interest. The limit of detection obtained using the working calibration curve is better than that obtained using its constituent (i.e., individual) calibration curves. The quantitative results obtained using the working calibration curve are also in better agreement with the results of inductively coupled plasma-atomic emission spectroscopy than those obtained using the constituent calibration curves.
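The abstract does not spell out how the two curves are combined; one simple reading, sketched below with hypothetical intensities, is to fit a single linear curve to the summed neutral-line and ion-line responses so that both species of the analyte contribute to one response:

```python
import numpy as np

# Hypothetical LIBS calibration data: certified concentrations (wt%) and
# background-corrected line intensities (a.u.) for a neutral line and a
# singly ionized line of the same element.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
I_neutral = np.array([120.0, 230.0, 480.0, 950.0, 1900.0])
I_ion = np.array([80.0, 170.0, 330.0, 640.0, 1300.0])

def linear_fit(x, y):
    """Least-squares calibration curve y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# Individual calibration curves for each species...
a_n, b_n = linear_fit(conc, I_neutral)
a_i, b_i = linear_fit(conc, I_ion)
# ...and the combined 'working' curve built from summed intensities.
a_c, b_c = linear_fit(conc, I_neutral + I_ion)

def predict_conc(intensity, a, b):
    """Invert a calibration curve to estimate concentration."""
    return (intensity - b) / a
```

Because the fit is linear, the working-curve slope is the sum of the constituent slopes, while the prediction uses the pooled (and hence less matrix-sensitive) response.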
Abstract:
The combined set of thermo-mechanical steps recommended for high-strength beta Ti alloys comprises, in sequence, homogenization, deformation, recrystallization, annealing and ageing. Recrystallization carried out above or below the beta transus temperature generates either a beta-annealed (lath-type morphology of alpha) or a bimodal (lath + globular morphology of alpha) microstructure. Through variations in the heat treatment parameters at these processing steps, wide ranges of length scales of features have been generated in both types of microstructures in a near-beta Ti alloy, Ti-5Al-5Mo-5V-3Cr (Ti-5553). The 0.2% yield strength (YS) has been correlated to the various microstructural features and associated heat treatment parameters. The relative importance of the microstructural features in influencing YS has been identified. Process parameters at the different steps have been identified and recommended for attaining different levels of YS for this near-beta Ti alloy. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
The occurrence of spurious solutions is a well-known limitation of the standard nodal finite element method when applied to electromagnetic problems. The two remedies commonly used to address this problem are (i) the addition of a penalty term, with the penalty factor based on the local dielectric constant, which reduces to a Helmholtz form on homogeneous domains (the regularized formulation); and (ii) a formulation based on a vector and a scalar potential. Both strategies have shortcomings. The penalty method does not completely eliminate the spurious modes, and both methods are incapable of predicting singular eigenvalues on non-convex domains. Some non-zero spurious eigenvalues are also predicted by these methods on non-convex domains. In this work, we develop mixed finite element formulations which predict the eigenfrequencies (including their multiplicities) accurately, even for non-convex domains. The main feature of the proposed mixed finite element formulation is that no ad-hoc terms are added, as in the penalty formulation; the improvement is achieved purely by an appropriate choice of finite element spaces for the different variables. We show that the formulation works even for inhomogeneous domains, where `double noding' is used to enforce the appropriate continuity requirements at an interface. For two-dimensional problems the shape of the domain can be arbitrary, while for three-dimensional problems, with our current formulation, only regular domains (which can be non-convex) can be modeled. Since the eigenfrequencies are modeled accurately, these elements also yield accurate results for driven problems. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Storage of water within a river basin is often estimated by analyzing recession flow curves, as it cannot be `instantly' estimated with the aid of available technologies. In this study we explicitly deal with the estimation of `drainable' storage, which is equal to the area under the `complete' recession flow curve (i.e. a discharge vs. time curve in which discharge continuously decreases until it approaches zero). A major challenge in this regard is that recession curves are rarely `complete', owing to short inter-storm time intervals. Therefore, it is essential to analyze and model recession flows meaningfully. We adopt the well-known Brutsaert and Nieber analytical method, which expresses the time derivative of discharge (dQ/dt) as a power-law function of Q: -dQ/dt = kQ^alpha. However, the problem with dQ/dt-Q analysis is that it is not suitable for late recession flows. Traditional studies often compute alpha from early recession flows and assume that its value is constant for the whole recession event, but this approach gives unrealistic results when alpha >= 2, a common case. We address this issue using the recently proposed geomorphological recession flow model (GRFM), which exploits the dynamics of active drainage networks. According to the model, alpha is close to 2 for early recession flows and 0 for late recession flows. We then derive a simple expression for drainable storage in terms of the power-law coefficient k, obtained by considering early recession flows only, and the basin area. Using 121 complete recession curves from 27 USGS basins, we show that predicted drainable storage matches well with observed drainable storage, indicating that the model can also reliably estimate drainable storage for `incomplete' recession events, addressing many challenges related to water resources. (C) 2014 Elsevier Ltd. All rights reserved.
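A minimal sketch of the Brutsaert-Nieber fit and the storage integral, run on a synthetic recession curve generated with alpha = 2 (the GRFM early-recession value) rather than real USGS data:

```python
import numpy as np

# Synthetic daily recession record (illustrative only):
# Q(t) = Q0 / (1 + k*Q0*t) solves -dQ/dt = k*Q^2, i.e. alpha = 2, k = 0.05.
t = np.arange(30.0)                  # days
Q = 50.0 / (1.0 + 0.05 * 50.0 * t)   # discharge

# Brutsaert-Nieber analysis: fit log(-dQ/dt) = log(k) + alpha*log(Q)
# using finite differences and interval-midpoint discharges.
dQdt = np.diff(Q) / np.diff(t)
Qmid = 0.5 * (Q[1:] + Q[:-1])
alpha, logk = np.polyfit(np.log(Qmid), np.log(-dQdt), 1)
k = np.exp(logk)

# 'Drainable' storage = area under the (complete) recession curve,
# approximated here by the trapezoidal rule.
storage = np.sum(0.5 * (Q[1:] + Q[:-1]) * np.diff(t))
```

Daily finite differences bias the fitted exponent slightly below the analytic alpha = 2, which is one reason the early-recession fit has to be interpreted with care.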
Abstract:
Aims. In this work we search for signatures of low-dimensional chaos in the temporal behavior of the Kepler-field blazar W2R 1946+42. Methods. We use a publicly available, ~160 000-point-long and mostly equally spaced light curve of W2R 1946+42. We apply the correlation integral method to both the real datasets and phase-randomized surrogates. Results. We are not able to confirm the presence of low-dimensional chaos in the light curve. This result nevertheless has some important implications for blazar emission mechanisms, which are discussed.
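The correlation integral method at the core of this analysis reduces to estimating the Grassberger-Procaccia correlation sum C(r) over delay-embedded vectors; a toy sketch on a synthetic noise series (not the Kepler light curve) is:

```python
import numpy as np

def correlation_sum(x, m, tau, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of pairs
    of m-dimensional delay vectors (delay tau) lying closer than r."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    i, j = np.triu_indices(n, k=1)
    return float(np.mean(d[i, j] < r))

# Toy stochastic series standing in for the light curve; the published
# analysis applies the same estimator to the data and to phase-randomized
# surrogates and compares the scaling of C(r) with r.
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
C = [correlation_sum(x, m=3, tau=1, r=r) for r in (0.5, 1.0, 2.0, 20.0)]
```

Low-dimensional chaos would show up as a scaling C(r) ~ r^D with an embedding-independent correlation dimension D, which the surrogates would fail to reproduce.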
Abstract:
Compost, vermicompost and biochar amendments are thought to improve soil quality and plant yield. However, little is known about their long-term impact on crop yield and the environment in tropical agro-ecosystems. In this study we investigated the effect of organic amendments (buffalo manure, compost and vermicompost) and biochar (applied alone or with vermicompost) on plant yield, soil fertility, soil erosion and water dynamics in a degraded Acrisol in Vietnam. Maize growth and yield, as well as weed growth, were examined for three years in terrestrial mesocosms under natural rainfall. Maize yield and growth showed high inter-annual variability depending on the organic amendment. Vermicompost improved maize growth and yield, but its effect was rather small and was only significant when water availability was limited (year 2). This suggests that vermicompost could be a promising substrate for improving the resistance of agrosystems to water stress. When the vermicompost-biochar mixture was applied, further growth and yield improvements were recorded in some cases. When applied alone, biochar had a positive influence on maize yield and growth, confirming its potential for improving long-term soil productivity. All organic amendments reduced water runoff, soil detachment and NH4+ and NO3- transfer to water. These effects were more significant with vermicompost than with buffalo manure and compost, highlighting that the beneficial influence of vermicompost is not limited to its effect on plant yield. In addition, this study showed for the first time that the combination of vermicompost and biochar may not only improve plant productivity but also reduce the negative impact of agriculture on water quality. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
A generalized explanation is provided for the red- and blue-shifting nature of X-Z bonds (Z = H, halogens, chalcogens, pnicogens, etc.) in X-Z···Y complexes, based on computational studies on a selected set of weakly bonded complexes and analysis of existing literature data. The additional electrons and orbitals available on Z in comparison to H make for dramatic differences between the H-bond and the rest of the Z-bonds. The nature of the X-group and its influence on the X-Z bond length in the parent X-Z molecule largely controls the change in the X-Z bond length on X-Z···Y bond formation; the Y-group usually influences only the magnitude of the effects controlled by X. The major factors which control the X-Z bond length change are: (a) negative hyperconjugative donation of electron density from the X-group to the X-Z sigma* antibonding molecular orbital (ABMO) in the parent X-Z, (b) induced negative hyperconjugation from the lone pair of electrons on Z to the antibonding orbitals of the X-group, and (c) charge transfer (CT) from the Y-group to the X-Z sigma* orbital. The exchange repulsion from the Y-group, which shifts partial electron density at the X-Z sigma* ABMO back to X, leads to blue-shifting, while CT from the Y-group to the sigma* ABMO of X-Z leads to red-shifting. The balance between these two opposing forces decides red-, zero- or blue-shifting. A continuum of behaviour of X-Z bond length variation is inevitable in X-Z···Y complexes.
Abstract:
We present an analysis of the rate of sign changes in the discrete Fourier spectrum of a sequence. The sign changes of either the real or the imaginary part of the spectrum are considered, and the rate of sign changes is termed the spectral zero-crossing rate (SZCR). We show that the SZCR carries information pertaining to the locations of transients within the temporal observation window. We show duality with temporal zero-crossing rate analysis by expressing the spectrum of a signal as a sum of sinusoids with random phases. This extension leads to spectral-domain iterative filtering approaches that stabilize the spectral zero-crossing rate and improve the location estimates. The localization properties are compared with group-delay-based localization metrics in a stylized signal setting well known in the speech processing literature. We show applications to epoch estimation in voiced speech signals using the SZCR on the integrated linear prediction residue. The performance of the SZCR-based epoch localization technique is competitive with state-of-the-art epoch estimation techniques based on the average pitch period.
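A minimal sketch of one SZCR variant, using the real part of the DFT and a shifted-impulse toy signal (hypothetical example, not the paper's speech data):

```python
import numpy as np

def spectral_zero_crossing_rate(x):
    """Rate of sign changes in the real part of the DFT of x
    (one SZCR variant; the imaginary part works analogously)."""
    s = np.sign(np.fft.fft(x).real)
    s = s[s != 0]                     # ignore exact zeros
    return float(np.mean(s[1:] != s[:-1]))

# Duality illustration: an impulse at sample n0 in an N-point window has
# spectrum exp(-j*2*pi*k*n0/N), whose real part cos(2*pi*k*n0/N) changes
# sign faster the later the transient occurs in the window.
N = 256
early = np.zeros(N); early[8] = 1.0    # transient near the window start
late = np.zeros(N); late[100] = 1.0    # transient deeper in the window
r_early = spectral_zero_crossing_rate(early)
r_late = spectral_zero_crossing_rate(late)
```

The growth of the SZCR with the transient's position is exactly the location cue the abstract refers to.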
Abstract:
The charge-pump (CP) mismatch current is a dominant source of static phase error and reference spur in nanometer CMOS PLL implementations due to the worsened channel-length modulation effect. This paper presents a CP mismatch current reduction technique utilizing adaptive body-bias tuning of the CP transistors and a zero-CP-mismatch-current tracking PLL architecture for reference spur suppression. A chip prototype of the proposed circuit was implemented in 0.13 µm CMOS technology. The frequency synthesizer consumes 8.2 mA from a 1.3 V supply voltage and achieves a phase noise of -96.01 dBc/Hz at 1 MHz offset from a 2.4 GHz RF carrier. The charge-pump measurements using the proposed calibration technique exhibited a mismatch current of less than 0.3 µA (0.55%) over the VCO control voltage range of 0.3-1.0 V. The closed-loop measurements show a static phase error minimized to within ±70 ps and an approximately 9 dB reduction in reference spur level across the PLL output frequency range of 2.4-2.5 GHz. The presented CP calibration technique compensates for the DC current mismatch and the mismatch due to the channel-length modulation effect, and therefore improves the performance of CP-PLLs in nanometer CMOS implementations. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
In this work, the hypothesis testing problem of spectrum sensing in a cognitive radio is formulated as a goodness-of-fit test against the general class of noise distributions used in most communications-related applications. A simple, general, and powerful spectrum sensing technique based on the number of weighted zero-crossings in the observations is proposed. For the cases of uniform and exponential weights, an expression for computing the near-optimal detection threshold that meets a given false-alarm probability constraint is obtained. The proposed detector is shown to be robust to two commonly encountered types of noise uncertainty, namely, noise model uncertainty, where the PDF of the noise process is not completely known, and noise parameter uncertainty, where the parameters associated with the noise PDF are either partially or completely unknown. Simulation results validate our analysis and illustrate the performance benefits of the proposed technique relative to existing methods, especially in the low-SNR regime and in the presence of noise uncertainties.
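A minimal sketch of the zero-crossing count with uniform weights (the threshold derivation is omitted, and the signals below are hypothetical): for i.i.d. symmetric noise, adjacent samples change sign with probability 1/2, so the count concentrates near (n - 1)/2, while a strong correlated primary signal pulls it well away from that value.

```python
import numpy as np

def zero_crossing_statistic(y, w=None):
    """Count of (optionally weighted) zero-crossings in the observations y."""
    s = np.sign(y)
    crossings = (s[1:] != s[:-1]).astype(float)
    if w is None:                       # uniform weights
        w = np.ones_like(crossings)
    return float(np.sum(w * crossings))

rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
noise_only = rng.standard_normal(n)
# Strong low-frequency primary signal: the crossing count drops far
# below the i.i.d.-noise expectation of about (n - 1)/2.
signal_plus_noise = 5.0 * np.sin(2 * np.pi * 0.01 * t) + rng.standard_normal(n)

T0 = zero_crossing_statistic(noise_only)         # near (n - 1)/2
T1 = zero_crossing_statistic(signal_plus_noise)  # much smaller
```

A detector then declares the channel occupied when the statistic deviates from the noise-only value by more than a threshold set from the false-alarm constraint.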