865 results for ellipse fitting
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems. In this setting, recent works have shown how to obtain statistics of extremes in agreement with classical Extreme Value Theory. We pursue these investigations by giving analytical expressions for the Extreme Value distribution parameters of maps that have an absolutely continuous invariant measure. We compare these analytical results with numerical experiments in which we study the convergence to the limiting distributions using the so-called block-maxima approach, pointing out in which cases we obtain a robust estimation of the parameters. In regular maps, for which mixing properties do not hold, we show that the fitting procedure to the classical Extreme Value Distribution fails, as expected. However, we obtain an empirical distribution that can be explained starting from a different observable function, for which Nicolis et al. (Phys. Rev. Lett. 97(21): 210602, 2006) have found analytical results.
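A minimal sketch of the block-maxima fitting procedure described in this abstract, using synthetic data in place of a map orbit (the series, block size and observable below are placeholders, not the paper's setup):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Placeholder series standing in for an observable evaluated along a map orbit.
series = rng.standard_normal(100_000)

# Block-maxima approach: split the series into blocks and keep each block maximum.
block_size = 1_000
n_blocks = series.size // block_size
maxima = series[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

# Fit the Generalized Extreme Value distribution to the block maxima
# (scipy parameterises the shape as c = -xi).
shape, loc, scale = genextreme.fit(maxima)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```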
Abstract:
This paper reviews the literature on the distribution of commercial real estate returns. There is growing evidence that the assumption of normality in returns is not safe. Distributions are found to be peaked, fat-tailed and, tentatively, skewed. There is some evidence of compound distributions and non-linearity. Publicly traded real estate assets (such as property company or REIT shares) behave in a fashion more similar to other common stocks. However, as in equity markets, it would be unwise to assume normality uncritically. Empirical evidence for UK real estate markets is obtained by applying distribution-fitting routines to IPD Monthly Index data for the aggregate index and selected sub-sectors. It is clear that normality is rejected in most cases. It is often argued that observed differences in real estate returns are a measurement issue resulting from appraiser behaviour. However, unsmoothing the series does not assist in modelling returns. A large proportion of returns are close to zero. This would be characteristic of a thinly-traded market where new information arrives infrequently. Analysis of quarterly data suggests that, over longer trading periods, return distributions may conform more closely to those found in other asset markets. These results have implications for the formulation and implementation of a multi-asset portfolio allocation strategy.
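As an illustration of the kind of distribution-fitting routine mentioned above, the sketch below tests normality and compares a normal against a Student-t fit on a synthetic return series (the data and the choice of tests are assumptions, not the IPD analysis itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic fat-tailed monthly return series standing in for an index sub-sector.
returns = 0.01 * rng.standard_t(df=4, size=240)

# Jarque-Bera test rejects normality for peaked / fat-tailed samples.
jb_stat, jb_p = stats.jarque_bera(returns)
print(f"skew={stats.skew(returns):.2f}, excess kurtosis={stats.kurtosis(returns):.2f}, JB p-value={jb_p:.4f}")

# Compare maximum-likelihood fits of a normal and a Student-t distribution.
ll_norm = stats.norm.logpdf(returns, *stats.norm.fit(returns)).sum()
ll_t = stats.t.logpdf(returns, *stats.t.fit(returns)).sum()
print(f"log-likelihood: normal={ll_norm:.1f}, Student-t={ll_t:.1f}")
```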
Abstract:
Rheology of milk foams generated by steam injection was studied during the transient destabilization process using steady flow and dynamic oscillatory techniques: yield stress (τ_y) values were obtained from a stress ramp (0.2 to 25 Pa) and from a strain amplitude sweep (0.001 to 3 at a frequency of 1 Hz); elastic (G') and viscous (G") moduli were measured by frequency sweep (0.1 to 150 Hz at a strain of 0.05); and the apparent viscosity (η_a) was obtained from the flow curves generated from the stress ramp. The effect of plate roughness and sweep time on τ_y was also assessed. Yield stress was found to increase with plate roughness, whereas it decreased with the sweep time. The values of yield stress and of the moduli G' and G" increased during foam destabilization as a consequence of the changes in foam properties, especially the gas volume fraction, φ, and bubble size, R_32 (Sauter mean bubble radius). Thus, a relationship between τ_y, φ, R_32, and σ (surface tension) was established. The changes in the apparent viscosity, η_a, showed that the foams behaved like a shear-thinning fluid beyond the yield point, fitting the modified Cross model with the relaxation time parameter (λ) also depending on the gas volume fraction. Overall, it was concluded that the viscoelastic behavior of the foam below the yield point and the liquid-like behavior thereafter both vary during destabilization due to changes in the foam characteristics.
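The sketch below shows how a Cross-type model can be fitted to a flow curve; the exact "modified Cross" form used in the paper is not given in the abstract, so the function, data and parameter values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_model(shear_rate, eta0, eta_inf, lam, m):
    # Common Cross-type form: eta = eta_inf + (eta0 - eta_inf) / (1 + (lam*rate)^m).
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** m)

# Hypothetical flow-curve data beyond the yield point (shear rate in 1/s, viscosity in Pa.s).
rate = np.logspace(-1, 2, 30)
rng = np.random.default_rng(2)
eta_obs = cross_model(rate, 5.0, 0.05, 2.0, 0.8) * (1 + 0.05 * rng.standard_normal(rate.size))

popt, _ = curve_fit(cross_model, rate, eta_obs, p0=[1.0, 0.01, 1.0, 1.0], bounds=(0, np.inf))
print(dict(zip(["eta0", "eta_inf", "lambda", "m"], popt)))
```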
Abstract:
An ozonesonde profile over the Network for Detection of Stratospheric Change (NDSC) site at Lauder (45.0° S, 169.7° E), New Zealand, for 24 December 1998 showed atypically low ozone centered around 24 km altitude (600 K potential temperature). The origin of the anomaly is explained using reverse domain filling (RDF) calculations combined with a PV/O3 fitting technique applied to ozone measurements from the Polar Ozone and Aerosol Measurement (POAM) III instrument. The RDF calculations for two isentropic surfaces, 550 and 600 K, show that ozone-poor air from the Antarctic polar vortex reached New Zealand on 24–26 December 1998. The vortex air on the 550 K isentrope originated in the ozone hole region, unlike the air on 600 K, where the low ozone values were caused by dynamical effects. High-resolution ozone maps were generated, and their examination shows that a vortex remnant situated above New Zealand was the cause of the altered ozone profile on 24 December. The maps also illustrate mixing of the vortex filaments into southern mid-latitudes, thereby decreasing overall mid-latitude ozone levels.
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained using a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit alternating periods of good and poor fit between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs; the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was therefore justifiable for modelling river-system hydrochemistry.
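A schematic version of the Monte Carlo/GLUE workflow described above, with a toy model standing in for the nitrogen models (the model, likelihood measure, behavioural threshold and sample size are all placeholder assumptions; the study itself used 100,000 runs per model):

```python
import numpy as np

rng = np.random.default_rng(3)

def run_model(params, n_steps=365):
    # Toy stand-in for one simulation: daily streamwater nitrate for a parameter set.
    a, b = params
    t = np.arange(n_steps)
    return a * np.sin(2 * np.pi * t / 365) + b + 0.1 * rng.standard_normal(n_steps)

observed = run_model((1.0, 2.0))          # synthetic "observations"

# Monte Carlo sampling of the parameter space.
n_runs = 5_000
params = rng.uniform([0.1, 0.5], [2.0, 4.0], size=(n_runs, 2))
sims = np.array([run_model(p) for p in params])

# Nash-Sutcliffe efficiency as the likelihood measure; keep behavioural runs only.
nse = 1 - ((sims - observed) ** 2).sum(axis=1) / ((observed - observed.mean()) ** 2).sum()
behavioural = nse > 0.5

# GLUE 5% / 95% uncertainty bounds from the behavioural ensemble
# (unweighted percentiles here; GLUE proper weights runs by their likelihood).
lower = np.percentile(sims[behavioural], 5, axis=0)
upper = np.percentile(sims[behavioural], 95, axis=0)
print(f"{behavioural.sum()} behavioural runs, mean band width = {(upper - lower).mean():.3f}")
```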
Abstract:
Volume determination of tephra deposits is necessary for the assessment of the dynamics and hazards of explosive volcanoes. Several methods have been proposed during the past 40 years that include the analysis of crystal concentration of large pumices, integrations of various thinning relationships, and the inversion of field observations using analytical and computational models. Regardless of their strong dependence on tephra-deposit exposure and distribution of isomass/isopach contours, empirical integrations of deposit thinning trends still represent the most widely adopted strategy due to their practical and fast application. The most recent methods involve the best fitting of thinning data using various exponential segments or a power-law curve on semilog plots of thickness (or mass/area) versus square root of isopach area. The exponential method is mainly sensitive to the number and the choice of straight segments, whereas the power-law method can better reproduce the natural thinning of tephra deposits but is strongly sensitive to the proximal or distal extreme of integration. We analyze a large data set of tephra deposits and propose a new empirical method for the determination of tephra-deposit volumes that is based on the integration of the Weibull function. The new method shows a better agreement with observed data, reconciling the debate on the use of the exponential versus power-law method. In fact, the Weibull best fit depends on only three free parameters, can well reproduce the gradual thinning of tephra deposits, and does not depend on the choice of arbitrary segments or of arbitrary extremes of integration.
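A sketch of how a Weibull-type thinning curve can be fitted to thickness versus square root of isopach area; the three-parameter form and the isopach values below are illustrative assumptions, not the paper's data set:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_thinning(x, theta, lam, n):
    # One common three-parameter Weibull-type thinning relationship:
    # thickness T as a function of x = sqrt(isopach area).
    return theta * (x / lam) ** (n - 2) * np.exp(-(x / lam) ** n)

# Hypothetical isopach data: x = sqrt(area) in km, thickness in m.
x = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
rng = np.random.default_rng(4)
thickness = weibull_thinning(x, 5.0, 12.0, 1.4) * (1 + 0.05 * rng.standard_normal(x.size))

popt, _ = curve_fit(weibull_thinning, x, thickness, p0=[5.0, 10.0, 1.5],
                    bounds=([0.1, 0.1, 0.5], [100.0, 200.0, 5.0]))
theta, lam, n = popt

# For this parameterisation the volume integrates in closed form:
# V = integral of T dA = integral of 2 x T(x) dx = 2 theta lam^2 / n.
volume_km3 = 2 * theta * lam ** 2 / n * 1e-3   # thickness in m, lengths in km
print(f"theta={theta:.2f} m, lambda={lam:.1f} km, n={n:.2f}, V={volume_km3:.3f} km^3")
```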
Abstract:
The concentrations of dissolved noble gases in water are widely used as a climate proxy to determine noble gas temperatures (NGTs); i.e., the temperature of the water when gas exchange last occurred. In this paper we take a step forward in applying this principle to fluid inclusions in stalagmites in order to reconstruct the cave temperature prevailing at the time when the inclusion was formed. We present an analytical protocol that allows us to accurately determine noble gas concentrations and isotope ratios in stalagmites, and which includes a precise manometric determination of the mass of water liberated from fluid inclusions. Most important for NGT determination is to reduce the amount of noble gases liberated from air inclusions, as they mask the temperature-dependent noble gas signal from the water inclusions. We demonstrate that offline pre-crushing in air, followed by extraction of noble gases and water from the samples by heating, is appropriate to separate gases released from air and water inclusions. Although a large fraction of recent samples analysed by this technique yields NGTs close to present-day cave temperatures, the interpretation of measured noble gas concentrations in terms of NGTs is not yet feasible using the available least-squares fitting models. This is because the noble gas concentrations in stalagmites are not composed solely of the two components, air and air-saturated water (ASW), that these models are able to account for. The observed enrichments in heavy noble gases are interpreted as being due to adsorption during sample preparation in air, whereas the excess in He and Ne is interpreted as an additional noble gas component that is bound in voids in the crystallographic structure of the calcite crystals. As a consequence of our study's findings, NGTs will have to be determined in the future using the concentrations of Ar, Kr and Xe only. This needs to be achieved by further optimizing the sample preparation to minimize atmospheric contamination and to further reduce the amount of noble gases released from air inclusions.
Abstract:
The dielectric constant, ε′, and the dielectric loss, ε″, of gelatin films were measured in the glassy and rubbery states over a frequency range from 20 Hz to 10 MHz; ε′ and ε″ were transformed into the M* formalism (M* = 1/(ε′ − iε″) = M′ + iM″, where i is the imaginary unit). The peak of ε″ was probably masked by DC conduction, but the peak of M″, i.e. the conductivity relaxation, was observed for the gelatin used. By fitting the M″ data to a Havriliak–Negami type equation, the relaxation time, τ_HN, was evaluated. The activation energy E_τ, evaluated from an Arrhenius plot of 1/τ_HN, agreed well with E_σ evaluated from the DC conductivity σ_0 in both the glassy and rubbery states, indicating that the conductivity relaxation observed for the gelatin films was ascribed to ionic conduction. The value of the activation energy in the glassy state was larger than that in the rubbery state.
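A sketch of fitting a Havriliak–Negami type relaxation to an M″ spectrum to extract τ_HN; the exact equation used in the paper is not given in the abstract, so the functional form and the spectrum below are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def hn_imag_modulus(freq, dM, tau, alpha, beta):
    # Generic Havriliak-Negami shape written in the modulus formalism:
    # M*(w) = M_s + dM * (1 - 1 / (1 + (i*w*tau)^alpha)^beta); only Im(M*) is returned.
    omega = 2 * np.pi * freq
    m_star = dM * (1 - 1 / (1 + (1j * omega * tau) ** alpha) ** beta)
    return m_star.imag

# Hypothetical M'' spectrum between 20 Hz and 10 MHz.
freq = np.logspace(np.log10(20), 7, 60)
m2_obs = hn_imag_modulus(freq, 0.02, 1e-4, 0.9, 0.7)

popt, _ = curve_fit(hn_imag_modulus, freq, m2_obs, p0=[0.01, 1e-3, 0.8, 0.8],
                    bounds=([1e-4, 1e-8, 0.2, 0.2], [1.0, 1.0, 1.0, 1.0]))
print(f"tau_HN = {popt[1]:.2e} s, alpha = {popt[2]:.2f}, beta = {popt[3]:.2f}")
```

Repeating such a fit at several temperatures and plotting ln(1/τ_HN) against 1/T would then give the activation energy E_τ from the Arrhenius slope.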
Abstract:
The enhanced radar return associated with melting snow, ‘the bright band’, can lead to large overestimates of rain rates. Most correction schemes rely on fitting the radar observations to a vertical profile of reflectivity (VPR) which includes the bright band enhancement. Observations show that the VPR is very variable in space and time; large enhancements occur for melting snow, but none for the melting graupel in embedded convection. Applying a bright band VPR correction to a region of embedded convection will lead to a severe underestimate of rainfall. We revive an earlier suggestion that high values of the linear depolarisation ratio (LDR) are an excellent means of detecting when bright band contamination is occurring and that the value of LDR may be used to correct the value of Z in the bright band.
Abstract:
Possible changes in the frequency and intensity of windstorms under future climate conditions during the 21st century are investigated based on an ECHAM5 GCM multi-scenario ensemble. The intensity of a storm is quantified by the associated estimated loss derived using an empirical model. The geographical focus is ‘Core Europe’, which comprises countries of Western Europe. Possible changes of losses are analysed by comparing ECHAM5 GCM data for recent (20C, 1960 to 2000) and future climate conditions (B1, A1B, A2; 2060 to 2100), each with 3 ensemble members. Changes are quantified using both rank statistics and return periods (RPs), the latter estimated by fitting an extreme value distribution to potential storm losses using the peaks-over-threshold method. The estimated losses for ECHAM5 20C and reanalysis events show similar statistical features in terms of return periods. Under future climate conditions, all climate scenarios show an increase in both frequency and magnitude of potential losses caused by windstorms for Core Europe. Future losses that are double the highest ECHAM5 20C loss are identified for some countries. While positive changes of ranking are significant for many countries and multiple scenarios, significantly shorter RPs are mostly found under the A2 scenario for return levels corresponding to 20-yr losses or less. The emergence time of the statistically significant changes in loss varies from 2027 to 2100. These results imply an increased risk of occurrence of windstorm-associated losses, which can be largely attributed to changes in the meteorological severity of the events. Additionally, factors such as changes in the cyclone paths and in the location of the wind signatures relative to highly populated areas are also important to explain the changes in estimated losses.
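A minimal peaks-over-threshold sketch of the return-period estimation mentioned above, on synthetic loss data (the threshold choice, record length and event rate are placeholder assumptions):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
years = 40
losses = rng.gamma(shape=2.0, scale=1.0, size=years * 20)   # synthetic event losses

# Peaks over threshold: fit a Generalized Pareto Distribution to the excesses.
threshold = np.quantile(losses, 0.95)
excess = losses[losses > threshold] - threshold
xi, _, sigma = genpareto.fit(excess, floc=0.0)
rate = excess.size / years                 # mean exceedances per year

def return_level(T_years):
    # Standard GPD return-level formula (valid for xi != 0).
    return threshold + sigma / xi * ((rate * T_years) ** xi - 1)

print(f"20-yr return level: {return_level(20):.2f}")
```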
Abstract:
Full-waveform laser scanning data acquired with a Riegl LMS-Q560 instrument were used to classify an orange orchard into orange trees, grass and ground using waveform parameters alone. Gaussian decomposition was performed on these data, captured during the National Airborne Field Experiment in November 2006, using a custom peak-detection procedure and a trust-region-reflective algorithm for fitting Gaussian functions. Calibration was carried out using waveforms returned from a road surface, and the backscattering coefficient c was derived for every waveform peak. The processed data were then analysed according to the number of returns detected within each waveform and classified into three classes based on pulse width and c. For single-peak waveforms the scatterplot of c versus pulse width was used to distinguish between ground, grass and orange trees. In the case of multiple returns, the relationship between first (or first plus middle) and last return c values was used to separate ground from other targets. Refinement of this classification, and further sub-classification into grass and orange trees, was performed using the c versus pulse width scatterplots of last returns. In all cases the separation was carried out using a decision tree with empirical relationships between the waveform parameters. Ground points were successfully separated from orange tree points. The most difficult class to separate and verify was grass, but those points in general corresponded well with the grass areas identified in the aerial photography. The overall accuracy reached 91%, using photography and relative elevation as ground truth. The overall accuracy for two classes, orange tree and a combined grass-and-ground class, was 95%. Finally, the backscattering coefficient c of single-peak waveforms was also used to derive reflectance values of the three classes. The reflectance of the orange tree class (0.31) and ground class (0.60) are consistent with published values at the wavelength of the Riegl scanner (1550 nm). The grass class reflectance (0.46) falls between the other two classes, as might be expected, since this class mixes the contributions of vegetation and ground reflectance properties.
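The sketch below illustrates Gaussian decomposition of a return waveform using a simple peak detector and a trust-region-reflective least-squares fit; the waveform, detection threshold and initial pulse width are assumptions, not the custom procedure used in the study:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_sum(t, params):
    # Sum of Gaussian pulses; params is a flat array of (amplitude, centre, width) triples.
    out = np.zeros_like(t)
    for a, mu, s in params.reshape(-1, 3):
        out += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return out

# Hypothetical digitised waveform (time in ns) containing two echoes plus noise.
t = np.arange(0.0, 60.0, 1.0)
rng = np.random.default_rng(6)
waveform = gaussian_sum(t, np.array([80.0, 18.0, 2.5, 35.0, 42.0, 3.0])) + rng.normal(0, 1.5, t.size)

# Simple local-maximum peak detection to seed the fit.
seeds = [i for i in range(1, t.size - 1)
         if waveform[i] > waveform[i - 1] and waveform[i] > waveform[i + 1] and waveform[i] > 20]
p0 = np.ravel([[waveform[i], t[i], 3.0] for i in seeds])

# Trust-region-reflective least squares for the Gaussian components.
fit = least_squares(lambda p: gaussian_sum(t, p) - waveform, p0, method="trf")
print(fit.x.reshape(-1, 3))   # amplitude, centre (ns), pulse width (ns) per detected echo
```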
Abstract:
It is widely accepted that some of the most accurate Value-at-Risk (VaR) estimates are based on an appropriately specified GARCH process. But when the forecast horizon is greater than the frequency of the GARCH model, such predictions have typically required time-consuming simulations of the aggregated returns distributions. This paper shows that fast, quasi-analytic GARCH VaR calculations can be based on new formulae for the first four moments of aggregated GARCH returns. Our extensive empirical study compares the Cornish–Fisher expansion with the Johnson SU distribution for fitting distributions to analytic moments of normal and Student t, symmetric and asymmetric (GJR) GARCH processes to returns data on different financial assets, for the purpose of deriving accurate GARCH VaR forecasts over multiple horizons and significance levels.
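A sketch of how a Cornish–Fisher quantile can turn the first four moments of an aggregated return distribution into a quasi-analytic VaR number (the moment values below are placeholders, not outputs of the paper's GARCH moment formulae):

```python
from scipy.stats import norm

def cornish_fisher_var(mean, std, skew, exkurt, alpha=0.01):
    # One standard Cornish-Fisher adjustment of the normal quantile using
    # skewness and excess kurtosis; VaR is reported as a positive loss.
    z = norm.ppf(alpha)
    z_cf = (z
            + (z ** 2 - 1) * skew / 6
            + (z ** 3 - 3 * z) * exkurt / 24
            - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    return -(mean + std * z_cf)

# Hypothetical moments of a 10-day aggregated GARCH return distribution.
print(f"1% 10-day VaR: {cornish_fisher_var(0.002, 0.045, -0.6, 1.8):.4f}")
```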
Abstract:
An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
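For reference, a minimal sketch of the standard method the PCA approach is compared against: a log-linear least-squares fit of the diffusion tensor followed by eigendecomposition (the gradient scheme, b-value and tensor below are synthetic assumptions):

```python
import numpy as np

bval = 1000.0                                   # s/mm^2, hypothetical
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

def fit_tensor(s0, dw_signals):
    # Least-squares fit of the six unique tensor elements from ln(S/S0) = -b g^T D g.
    g = dirs
    A = -bval * np.column_stack([g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
                                 2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    d = np.linalg.lstsq(A, np.log(dw_signals / s0), rcond=None)[0]
    D = np.array([[d[0], d[3], d[4]], [d[3], d[1], d[5]], [d[4], d[5], d[2]]])
    evals, evecs = np.linalg.eigh(D)
    md = evals.mean()
    fa = np.sqrt(1.5 * ((evals - md) ** 2).sum() / (evals ** 2).sum())
    return evecs[:, -1], fa, md                 # fibre direction, FA, mean diffusivity

# Synthetic single-fibre voxel with a prolate tensor aligned along x.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
signals = 100.0 * np.exp(-bval * np.einsum("ij,jk,ik->i", dirs, D_true, dirs))
print(fit_tensor(100.0, signals))
```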
Abstract:
Radiometric data in the visible domain acquired by satellite remote sensing have proven to be powerful for monitoring the states of the ocean, both physical and biological. With the help of these data it is possible to understand certain variations in biological responses of marine phytoplankton on ecological time scales. Here, we implement a sequential data-assimilation technique to estimate, from a conventional nutrient–phytoplankton–zooplankton (NPZ) model, the time variations of observed and unobserved variables. In addition, we estimate the time evolution of two biological parameters, namely, the specific growth rate and specific mortality of phytoplankton. Our study demonstrates that: (i) the series of time-varying estimates of specific growth rate obtained by sequential data assimilation improves the fitting of the NPZ model to the satellite-derived time series: the model trajectories are closer to the observations than those obtained by implementing static values of the parameter; (ii) the estimates of unobserved variables, i.e., nutrient and zooplankton, obtained from an NPZ model by implementing a pre-defined parameter evolution can be different from those obtained on applying the sequences of parameters estimated by assimilation; and (iii) the maximum estimated specific growth rate of phytoplankton in the study area is more sensitive to the sea-surface temperature than would be predicted by temperature-dependent functions reported previously. The overall results of the study are potentially useful for enhancing our understanding of the biological response of phytoplankton in a changing environment.
Abstract:
We present a study of coronal mass ejections (CMEs) which impacted one of the STEREO spacecraft between January 2008 and early 2010. We focus our study on 20 CMEs which were observed remotely by the Heliospheric Imagers (HIs) onboard the other STEREO spacecraft up to large heliocentric distances. We compare the predictions of the Fixed-Φ and Harmonic Mean (HM) fitting methods, which only differ by the assumed geometry of the CME. It is possible to use these techniques to determine from remote-sensing observations the CME direction of propagation, arrival time and final speed, which are compared to in-situ measurements. We find evidence that for large viewing angles, the HM fitting method predicts the CME direction better. However, this may be due to the fact that only wide CMEs can be successfully observed when the CME propagates more than 100° from the observing spacecraft. Overall, eight CMEs originating from behind the limb as seen by one of the STEREO spacecraft can be tracked, and their arrival time at the other STEREO spacecraft can be successfully predicted. This includes CMEs, such as the events on 4 December 2009 and 9 April 2010, which were viewed 130° away from their direction of propagation. Therefore, we predict that some Earth-directed CMEs will be observed by the HIs until early 2013, when the separation between Earth and one of the STEREO spacecraft will be similar to the separation of the two STEREO spacecraft in 2009–2010.
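A sketch of the Fixed-Φ idea: fitting a constant speed and propagation angle to a time-elongation track, assuming a point-like front moving radially (the observer distance, track and noise level are placeholders; the Harmonic Mean method instead assumes a circular front anchored at the Sun):

```python
import numpy as np
from scipy.optimize import curve_fit

AU = 1.496e8   # km

def fixed_phi_elongation(t, speed, phi_deg, d_obs=0.97 * AU):
    # Fixed-Phi geometry: tan(eps) = r sin(phi) / (d_obs - r cos(phi)), with r = speed * t.
    r = speed * t
    phi = np.radians(phi_deg)
    return np.degrees(np.arctan2(r * np.sin(phi), d_obs - r * np.cos(phi)))

# Hypothetical HI time-elongation track over ~3 days (time in s, elongation in deg).
t = np.linspace(0, 3 * 86400, 40)
rng = np.random.default_rng(7)
eps_obs = fixed_phi_elongation(t, 450.0, 60.0) + rng.normal(0, 0.3, t.size)

popt, _ = curve_fit(fixed_phi_elongation, t, eps_obs, p0=[400.0, 45.0])
print(f"speed = {popt[0]:.0f} km/s, propagation angle = {popt[1]:.1f} deg")
```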