935 results for Minimum norm
Abstract:
The ability to undertake repeat measurements of flow-mediated dilatation (FMD) within a short time of a previous measurement would be useful to improve accuracy or to repeat a failed initial procedure. Although standard methods report that a minimum of 10 min is required between measurements, there are no published data to support this. Thirty healthy volunteers had five FMD measurements performed within a 2-h period, separated by various time intervals (5, 15 and 30 min). In 19 volunteers, FMD was also performed as soon as the vessel had returned to its baseline diameter. There was no significant difference between any of the FMD measurements or parameters across the visits, indicating that repeat measurements may be taken after a minimum of 5 min, or as soon as the vessel has returned to its baseline diameter, which in some subjects may be less than 5 min.
Abstract:
A detailed analysis is presented of solar UV spectral irradiance for the period between May 2003 and August 2005, when data are available from both the Solar Ultraviolet Spectral Irradiance Monitor (SUSIM) instrument (on board the Upper Atmosphere Research Satellite (UARS) spacecraft) and the Solar Stellar Irradiance Comparison Experiment (SOLSTICE) instrument (on board the Solar Radiation and Climate Experiment (SORCE) satellite). The ultimate aim is to develop a data composite that can be used to accurately determine any differences between the “exceptional” solar minimum at the end of solar cycle 23 and the previous minimum at the end of solar cycle 22 without having to rely on proxy data to set the long‐term change. SUSIM data are studied because they are the only data available in the “SOLSTICE gap” between the end of available UARS SOLSTICE data and the start of the SORCE data. At any one wavelength the two data sets are considered too dissimilar to be combined into a meaningful composite if any one of three correlations does not exceed a threshold of 0.8. This criterion removes all wavelengths except those in a small range between 156 nm and 208 nm, the longer wavelengths of which influence ozone production and heating in the lower stratosphere. Eight different methods are employed to intercalibrate the two data sequences. All methods give smaller changes between the minima than are seen when the data are not adjusted; however, correcting the SUSIM data to allow for an exponentially decaying offset drift gives a composite that is largely consistent with the unadjusted data from the SOLSTICE instruments on both UARS and SORCE and in which the recent minimum is consistently lower in the wave band studied.
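A minimal sketch of one plausible intercalibration step, assuming synthetic overlapping series and a simple model in which the SUSIM-minus-SOLSTICE difference follows an exponentially decaying offset drift; the variable names and numbers are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical overlapping daily irradiance series at one wavelength
# (the real data would come from UARS/SUSIM and SORCE/SOLSTICE files).
t = np.arange(800.0)                                  # days since start of overlap
rng = np.random.default_rng(0)
solstice = 1.0 + 0.02 * np.sin(2 * np.pi * t / 27)    # toy 27-day rotation signal
susim = solstice + 0.05 * np.exp(-t / 300) + 0.01 + rng.normal(0, 0.002, t.size)

def drift(t, amp, tau, offset):
    """Exponentially decaying offset drift plus a constant offset."""
    return amp * np.exp(-t / tau) + offset

# Fit the SUSIM-minus-SOLSTICE difference with the drift model
popt, _ = curve_fit(drift, t, susim - solstice, p0=(0.05, 200.0, 0.0))

susim_corrected = susim - drift(t, *popt)
print("fitted amp=%.3f tau=%.0f d offset=%.4f" % tuple(popt))
```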
Abstract:
Internal risk management models of the kind popularized by J. P. Morgan are now used widely by the world’s most sophisticated financial institutions as a means of measuring risk. Using the returns on three of the most popular futures contracts on the London International Financial Futures Exchange, in this paper we investigate the possibility of using multivariate generalized autoregressive conditional heteroscedasticity (GARCH) models for the calculation of minimum capital risk requirements (MCRRs). We propose a method for the estimation of the value at risk of a portfolio based on a multivariate GARCH model. We find that the consideration of the correlation between the contracts can lead to more accurate, and therefore more appropriate, MCRRs compared with the values obtained from a univariate approach to the problem.
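A minimal sketch of a constant-conditional-correlation (CCC) approach to portfolio risk, using the Python arch package for the univariate GARCH(1,1) fits; the return series, weights and confidence level are synthetic stand-ins, and the paper's simulation-based MCRR calculation is not reproduced here:

```python
import numpy as np
from arch import arch_model  # pip install arch

# Hypothetical daily return series for two futures contracts (percent).
rng = np.random.default_rng(1)
r1 = rng.normal(0, 1.2, 1000)
r2 = 0.6 * r1 + rng.normal(0, 0.8, 1000)
weights = np.array([0.5, 0.5])

vols, std_resid = [], []
for r in (r1, r2):
    res = arch_model(r, vol="Garch", p=1, q=1).fit(disp="off")
    vols.append(res.conditional_volatility[-1])       # latest fitted sigma
    std_resid.append(res.resid / res.conditional_volatility)

# Constant conditional correlation estimated from standardized residuals
R = np.corrcoef(np.vstack(std_resid))
D = np.diag(vols)
port_var = weights @ D @ R @ D @ weights              # portfolio variance
var_99 = 2.326 * np.sqrt(port_var)                    # 99% one-day Gaussian VaR
print(f"one-day 99% VaR: {var_99:.2f}% of position value")
```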
Abstract:
This paper investigates the frequency of extreme events for three LIFFE futures contracts for the calculation of minimum capital risk requirements (MCRRs). We propose a semiparametric approach where the tails are modelled by the Generalized Pareto Distribution and smaller risks are captured by the empirical distribution function. We compare the capital requirements from this approach with those calculated from the unconditional density and from a conditional density - a GARCH(1,1) model. Our primary finding is that, both in-sample and for a hold-out sample, our extreme value approach yields results superior to either of the other two models, which do not explicitly model the tails of the return distribution. Since the use of these internal models will be permitted under the EC-CAD II, they could be widely adopted in the near future for determining capital adequacies. Hence, close scrutiny of competing models is required to avoid a potentially costly misallocation of capital resources while at the same time ensuring the safety of the financial system.
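A minimal sketch of the semiparametric peaks-over-threshold idea, assuming synthetic losses and a 95% threshold; var_semiparametric is an illustrative helper, not the paper's implementation:

```python
import numpy as np
from scipy.stats import genpareto

# Hypothetical daily losses (negated returns); the real inputs would be
# LIFFE futures returns.
rng = np.random.default_rng(2)
losses = rng.standard_t(df=4, size=5000)

u = np.quantile(losses, 0.95)                 # tail threshold
exceed = losses[losses > u] - u               # peaks over threshold
xi, _, beta = genpareto.fit(exceed, floc=0)   # fit GPD to exceedances

def var_semiparametric(q, losses, u, xi, beta):
    """Quantile q: empirical below the threshold, GPD above it."""
    n, n_u = losses.size, np.sum(losses > u)
    if q < 1 - n_u / n:
        return np.quantile(losses, q)         # empirical body
    # standard POT quantile formula
    return u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)

print("99.5% VaR:", var_semiparametric(0.995, losses, u, xi, beta))
```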
Abstract:
The recent low and prolonged minimum of the solar cycle, along with the slow growth in activity of the new cycle, has led to suggestions that the Sun is entering a Grand Solar Minimum (GSMi), potentially as deep as the Maunder Minimum (MM). This raises questions about the persistence and predictability of solar activity. We study the autocorrelation functions and predictability R^2_L(t) of solar indices, particularly group sunspot number R_G and the heliospheric modulation potential φ, for which we have data during the descent into the MM. For R_G and φ, R^2_L(t) > 0.5 for predictions up to t = 4 and t = 3 solar cycles into the future, respectively: sufficient to allow prediction of a GSMi onset. The lower predictability of sunspot number R_Z is discussed. The current declines in peak and mean R_G are the largest since the onset of the MM and exceed those around 1800, which failed to initiate a GSMi.
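A minimal sketch of lag-based predictability, assuming a synthetic cyclic index in place of R_G; here R^2 is taken as the squared linear correlation between the series and its value a fixed number of cycles ahead, a simplification of the paper's R^2_L(t):

```python
import numpy as np

def lagged_predictability(x, lag):
    """Squared linear correlation between x(t) and x(t + lag):
    the R^2 of predicting the series a fixed interval ahead."""
    r = np.corrcoef(x[:-lag], x[lag:])[0, 1]
    return r * r

# Hypothetical annual index with an ~11-yr cycle plus noise, standing in
# for group sunspot number R_G.
rng = np.random.default_rng(3)
t = np.arange(300)
r_g = 80 * (1 + np.sin(2 * np.pi * t / 11)) / 2 + rng.normal(0, 10, t.size)

cycle = 11  # years per solar cycle
for n_cycles in (1, 2, 3, 4):
    print(f"R^2 at {n_cycles} cycles ahead:",
          round(lagged_predictability(r_g, n_cycles * cycle), 2))
```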
Abstract:
Open solar flux (OSF) variations can be described by the imbalance between source and loss terms. We use spacecraft and geomagnetic observations of OSF from 1868 to present and assume the OSF source, S, varies with the observed sunspot number, R. Computing the required fractional OSF loss, χ, reveals a clear solar cycle variation, in approximate phase with R. While peak R varies significantly from cycle to cycle, χ is surprisingly constant in both amplitude and waveform. Comparisons of χ with measures of heliospheric current sheet (HCS) orientation reveal a strong correlation. The cyclic nature of χ is exploited to reconstruct OSF back to the start of sunspot records in 1610. This agrees well with the available spacecraft, geomagnetic, and cosmogenic isotope observations. Assuming S is proportional to R yields near-zero OSF throughout the Maunder Minimum. However, χ becomes negative during periods of low R, particularly the most recent solar minimum, meaning OSF production is underestimated. This is related to continued coronal mass ejection (CME) activity, and therefore OSF production, throughout solar minimum, despite R falling to zero. Correcting S for this produces a better match to the recent solar minimum OSF observations. It also results in a cycling, nonzero OSF during the Maunder Minimum, in agreement with cosmogenic isotope observations. These results suggest that during the Maunder Minimum, HCS tilt cycled as over recent solar cycles, and the CME rate was roughly constant at the levels measured during the most recent two solar minima.
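A minimal sketch of the source-loss continuity model d(OSF)/dt = S - χ·OSF, with an idealized sunspot cycle, an illustrative cyclic χ, and a constant source floor standing in for the CME activity that persists through minimum; all coefficients are made up for illustration:

```python
import numpy as np

# Toy continuity model for open solar flux: d(OSF)/dt = S - chi * OSF,
# with the source S tied to sunspot number R and the fractional loss chi
# cycling with solar-cycle phase. All numbers are illustrative.
dt = 1.0 / 12                      # timestep: one month, in years
t = np.arange(0, 44, dt)           # four 11-year cycles
phase = 2 * np.pi * (t % 11) / 11
R = 80 * (1 - np.cos(phase)) / 2   # idealized sunspot cycle
S = 0.1 * R + 0.5                  # source; the constant floor mimics CME
                                   # activity continuing through minimum
chi = 0.5 + 0.4 * np.cos(phase)    # cyclic fractional loss (illustrative)

osf = np.empty_like(t)
osf[0] = 5.0
for i in range(1, t.size):         # forward Euler integration
    osf[i] = osf[i - 1] + dt * (S[i - 1] - chi[i - 1] * osf[i - 1])

print("OSF range over final cycle:",
      osf[t >= 33].min().round(2), "to", osf[t >= 33].max().round(2))
```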
Abstract:
We study the feasibility of using the singular vector technique to create initial condition perturbations for short-range ensemble prediction systems (SREPS), focussing on the predictability of severe local storms and in particular deep convection. For this, a new final-time semi-norm based on the convective available potential energy (CAPE) is introduced. We compare singular vectors using the CAPE-norm with SVs using the more common total energy (TE) norm for a 2-week summer period in 2007, which includes a case of mesoscale extreme rainfall in the south west of Finland. The CAPE singular vectors perturb the CAPE field by increasing the specific humidity and temperature of the parcel and increase the lapse rate above the parcel in the lower troposphere, consistent with physical considerations. The CAPE-SVs are situated in the lower troposphere. This is in contrast to TE-SVs with short optimization times, which predominantly remain in the high troposphere. By examining the time evolution of the CAPE singular values we observe that the convective event in the south west of Finland is clearly associated with high CAPE singular values.
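For orientation, a minimal sketch of the CAPE integral itself (the quantity the new semi-norm is built on), assuming an idealized sounding; the singular-vector machinery is far beyond this snippet:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def cape(z, t_env, t_parcel):
    """Trapezoidal CAPE: integrate parcel buoyancy where it is positive.
    z in m, temperatures in K (virtual temperature, ideally)."""
    buoy = G * (t_parcel - t_env) / t_env
    buoy = np.clip(buoy, 0.0, None)          # only positive area counts
    return np.sum(0.5 * (buoy[1:] + buoy[:-1]) * np.diff(z))  # J kg^-1

# Hypothetical sounding: environment vs. a lifted parcel profile.
z = np.linspace(0, 12000, 121)
t_env = 300 - 6.5e-3 * z                     # ~standard lapse rate
t_parcel = 302 - 6.0e-3 * z                  # warmer, slower-cooling parcel
print(f"CAPE = {cape(z, t_env, t_parcel):.0f} J/kg")
```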
Abstract:
We study the empirical performance of the classical minimum-variance hedging strategy, comparing several econometric models for estimating hedge ratios of crude oil, gasoline and heating oil crack spreads. Given the great variability and large jumps in both spot and futures prices, considerable care is required when processing the relevant data and accounting for the costs of maintaining and re-balancing the hedge position. We find that the variance reduction produced by all models is statistically and economically indistinguishable from the one-for-one “naïve” hedge. However, minimum-variance hedging models, especially those based on GARCH, generate much greater margin and transaction costs than the naïve hedge. Therefore we encourage hedgers to use a naïve hedging strategy on the crack spread bundles now offered by the exchange; this strategy is the cheapest and easiest to implement. Our conclusion contradicts the majority of the existing literature, which favours the implementation of GARCH-based hedging strategies.
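A minimal sketch of the classical minimum-variance hedge ratio and the variance-reduction comparison against the one-for-one naive hedge, assuming synthetic spot and futures returns rather than crack-spread data:

```python
import numpy as np

# Hypothetical spot and futures returns for a hedged position.
rng = np.random.default_rng(4)
f = rng.normal(0, 1.5, 2000)                 # futures returns
s = 0.9 * f + rng.normal(0, 0.6, 2000)       # correlated spot returns

# Classical minimum-variance hedge ratio: h* = Cov(s, f) / Var(f),
# i.e. the OLS slope of spot returns on futures returns.
h_star = np.cov(s, f)[0, 1] / np.var(f, ddof=1)

for name, h in [("naive (h=1)", 1.0), ("min-variance", h_star)]:
    hedged = s - h * f
    reduction = 1 - np.var(hedged) / np.var(s)
    print(f"{name}: h={h:.3f}, variance reduction={reduction:.1%}")
```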
Abstract:
In this paper we study Dirichlet convolution with a given arithmetical function f as a linear mapping φ_f that sends a sequence (a_n) to (b_n), where b_n = Σ_{d|n} f(d) a_{n/d}. We investigate when this is a bounded operator on ℓ^2 and find the operator norm. Of particular interest is the case f(n) = n^{-α} for its connection to the Riemann zeta function on the line Re(s) = α. For α > 1, φ_f is bounded with ‖φ_f‖ = ζ(α). For the unbounded case, we show that φ_f : M_2 → M_2, where M_2 is the subset of ℓ^2 of multiplicative sequences, for many f ∈ M_2. Consequently, we study the 'quasi'-norm sup{‖φ_f a‖/‖a‖ : a ∈ M_2, ‖a‖ = T} for large T, which measures the 'size' of φ_f on M_2. For the case f(n) = n^{-α}, we show that this quasi-norm bears a striking resemblance to the conjectured maximal order of |ζ(α + iT)| for α > 1/2.
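A quick numerical check of the bounded case, assuming the truncated N x N matrix representation M[n, m] = f(n/m) for m | n; its spectral norm approaches ζ(α) from below as N grows:

```python
import numpy as np
from scipy.special import zeta

# Truncate the mapping (a_n) -> (b_n), b_n = sum_{d|n} f(d) a_{n/d},
# to an N x N lower-triangular matrix and compare its operator norm
# (largest singular value) with zeta(alpha).
N, alpha = 400, 1.5
f = lambda n: n ** -alpha

M = np.zeros((N, N))
for n in range(1, N + 1):
    for m in range(1, n + 1):
        if n % m == 0:
            M[n - 1, m - 1] = f(n // m)

print("||phi_f|| (truncated):", np.linalg.norm(M, 2))
print("zeta(alpha):          ", zeta(alpha))
```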
Abstract:
We develop a new sparse kernel density estimator using a forward constrained regression framework, within which the nonnegative and summing-to-unity constraints of the mixing weights can easily be satisfied. Our main contribution is to derive a recursive algorithm to select significant kernels one at a time based on the minimum integrated square error (MISE) criterion for both the selection of kernels and the estimation of mixing weights. The proposed approach is simple to implement and the associated computational cost is very low. Specifically, the complexity of our algorithm is of the order of the number of training data, N, which is much lower than the order N^2 offered by the best existing sparse kernel density estimators. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with comparable accuracy to those of the classical Parzen window estimate and other existing sparse kernel density estimators.
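A naive greedy sketch of the forward kernel-selection idea, assuming a synthetic two-component sample; the least-squares weights here are unconstrained and the search is brute force, unlike the paper's O(N) recursion with nonnegative, summing-to-unity weights:

```python
import numpy as np

# Pick kernels one at a time to minimize squared error against the full
# Parzen estimate on a grid (brute-force stand-in for the MISE criterion).
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])
h = 0.3                                     # kernel width
grid = np.linspace(-5, 4, 400)

def gauss(c):                               # Gaussian kernel centred at c
    return np.exp(-0.5 * ((grid - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

parzen = np.mean([gauss(c) for c in x], axis=0)   # full Parzen estimate
K = np.array([gauss(c) for c in x])               # all candidate kernels

chosen, weights = [], None
for _ in range(6):                          # select 6 kernels
    best, best_err = None, np.inf
    for i in range(len(x)):
        if i in chosen:
            continue
        cand = chosen + [i]
        # least-squares mixing weights for the candidate kernel set
        w, *_ = np.linalg.lstsq(K[cand].T, parzen, rcond=None)
        err = np.sum((K[cand].T @ w - parzen) ** 2)
        if err < best_err:
            best, best_err, weights = i, err, w
    chosen.append(best)

print("selected centres:", np.round(x[chosen], 2))
print("mixing weights:  ", np.round(weights, 3))
```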
Abstract:
The behavior of the Sun and near-Earth space during grand solar minima is not understood; however, the recent long and low minimum of the decadal-scale solar cycle gives some important clues, with implications for understanding the solar dynamo and predicting space weather conditions. The speed of the near-Earth solar wind and the strength of the interplanetary magnetic field (IMF) embedded within it can be reliably reconstructed for times before the advent of spacecraft monitoring using observations of geomagnetic activity that extend back to the mid-19th century. We show that during the solar cycle minima around 1879 and 1901 the average solar wind speed was exceptionally low, implying the Earth remained within the streamer belt of slow solar wind flow for extended periods. This is consistent with a broader streamer belt, which was also a feature of the recent low minimum (2009), and yields a prediction that the low near-Earth IMF during the Maunder minimum (1640-1700), as derived from models and deduced from cosmogenic isotopes, was accompanied by a persistent and relatively constant solar wind of speed roughly half the average for the modern era.