952 results for "Multivariate statistical method"
Abstract:
The occurrence of mid-latitude windstorms is related to strong socio-economic effects. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of 100 top ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as pure dynamical downscaling. This new tool can be easily applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
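The regression step of such a downscaling can be sketched in a few lines. Everything below is illustrative (predictor choice, coefficients, and scales are invented, not the study's actual setup); the real method fits one model per high-resolution grid point from reanalysis predictors:

```python
import numpy as np

# Illustrative sketch of statistical downscaling by multiple linear regression:
# relate coarse-scale predictors (e.g. reanalysis wind speed and a pressure
# gradient) to the gust speed at one high-resolution grid point. The study
# trains on 100 top-ranking windstorm events; the data here are synthetic.
rng = np.random.default_rng(0)
n_events = 100
X = rng.normal(size=(n_events, 2))             # two coarse-scale predictors
beta_true = np.array([3.0, 1.5])
gusts = 20.0 + X @ beta_true + rng.normal(scale=0.5, size=n_events)  # m/s

A = np.column_stack([np.ones(n_events), X])    # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, gusts, rcond=None)

def predict_gust(x):
    """Downscaled gust estimate for a new coarse-scale predictor vector."""
    return coef[0] + np.asarray(x) @ coef[1:]
```

Once the coefficients are fitted per grid point, applying the model to a new event is a single matrix-vector product, which is why the approach is computationally inexpensive compared with rerunning a regional climate model.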
Abstract:
A statistical–dynamical regionalization approach is developed to assess possible changes in wind storm impacts. The method is applied to North Rhine-Westphalia (Western Germany) using the FOOT3DK mesoscale model for dynamical downscaling and ECHAM5/OM1 global circulation model climate projections. The method first classifies typical weather developments within the reanalysis period using a K-means cluster algorithm. Most historical wind storms are associated with four weather developments (primary storm-clusters). Mesoscale simulations are performed for representative elements of all clusters to derive a regional wind climatology. Additionally, 28 historical storms affecting Western Germany are simulated. Empirical functions are estimated to relate wind gust fields and insured losses. Transient ECHAM5/OM1 simulations show an enhanced frequency of primary storm-clusters and storms for 2060–2100 compared to 1960–2000. Accordingly, wind gusts increase over Western Germany, reaching locally +5% for 98th wind gust percentiles (A2 scenario). Consequently, storm losses are expected to increase substantially (+8% for the A1B scenario, +19% for the A2 scenario). Regional patterns show larger changes over north-eastern parts of North Rhine-Westphalia than over western parts. For storms with return periods above 20 yr, loss expectations for Germany may increase by a factor of 2. These results document the method's ability to assess future changes in loss potentials in regional terms.
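The classification step of such a statistical–dynamical approach can be illustrated with a minimal Lloyd's-algorithm K-means over synthetic "circulation" vectors. The real study clusters weather developments in reanalysis data; the two-regime toy data and all parameters below are assumptions for the sketch:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal K-means (Lloyd's algorithm): assign each sample to the
    nearest centre, then move each centre to the mean of its members."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

# Toy data: two well-separated "weather regimes" in a 2-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(5.0, 0.5, (50, 2))])
labels, centres = kmeans(X, k=2)
```

In the study's setting each cluster would then be represented by one element simulated with the mesoscale model, so the expensive dynamical step runs only once per regime instead of once per day of the record.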
Abstract:
We consider methods of evaluating multivariate density forecasts. A recently proposed method is found to lack power when the correlation structure is mis-specified. Tests that have good power to detect mis-specifications of this sort are described. We also consider the properties of the tests in the presence of more general mis-specifications.
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
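A minimal sketch of the update logic described above: the concentration increment comes from Newtonian relaxation toward an observation, and the mean-thickness increment is taken proportional to it. The relaxation gain and the proportionality factor are illustrative tunables here, not the paper's calibrated choices:

```python
def analysis_update(c_b, h_b, c_obs, gamma=0.1, alpha=0.5):
    """One analysis step for a single grid cell.

    c_b, h_b : background ice concentration and mean thickness
    c_obs    : observed ice concentration
    gamma    : Newtonian-relaxation gain (assumed value)
    alpha    : proportionality between thickness and concentration
               increments (assumed value)
    """
    dc = gamma * (c_obs - c_b)   # concentration increment (nudging)
    dh = alpha * dc              # proportional mean-thickness increment
    return c_b + dc, h_b + dh
```

The key point the abstract makes is encoded in `dh = alpha * dc`: the thickness update is slaved to the concentration update rather than conserving mean or actual thickness, which is what made the assimilation perform well in their experiments.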
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
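As a sketch of the winning approach, the minimum-variance hedge ratio cov(s, f)/var(f) can be computed from exponentially weighted second moments. The decay factor and the synthetic return series are assumptions for illustration, not the article's data:

```python
import numpy as np

def ewma_hedge_ratio(spot_ret, fut_ret, lam=0.94):
    """Minimum-variance hedge ratio cov(s, f) / var(f), with both second
    moments estimated by an exponentially weighted moving average.
    lam is an assumed decay factor (0.94 is a common RiskMetrics choice)."""
    cov = var = 0.0
    for s, f in zip(spot_ret, fut_ret):
        cov = lam * cov + (1.0 - lam) * s * f
        var = lam * var + (1.0 - lam) * f * f
    return cov / var

# Synthetic currency returns where the spot loads on the futures with beta 0.8.
rng = np.random.default_rng(2)
fut = rng.normal(scale=0.01, size=2000)
spot = 0.8 * fut + rng.normal(scale=0.001, size=2000)
ratio = ewma_hedge_ratio(spot, fut)
```

The EWMA recursion adapts to recent volatility while remaining far cheaper to estimate than a multivariate GARCH model, which is consistent with the article's finding that it produced the lowest hedged-portfolio variances.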
Abstract:
A method is proposed for merging different nadir-sounding climate data records using measurements from high-resolution limb sounders to provide a transfer function between the different nadir measurements. The two nadir-sounding records need not be overlapping so long as the limb-sounding record bridges between them. The method is applied to global-mean stratospheric temperatures from the NOAA Climate Data Records based on the Stratospheric Sounding Unit (SSU) and the Advanced Microwave Sounding Unit-A (AMSU), extending the SSU record forward in time to yield a continuous data set from 1979 to present, and providing a simple framework for extending the SSU record into the future using AMSU. SSU and AMSU are bridged using temperature measurements from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), which is of high enough vertical resolution to accurately represent the weighting functions of both SSU and AMSU. For this application, a purely statistical approach is not viable since the different nadir channels are not sufficiently linearly independent, statistically speaking. The near-global-mean linear temperature trends for extended SSU for 1980–2012 are −0.63 ± 0.13, −0.71 ± 0.15 and −0.80 ± 0.17 K decade⁻¹ (95% confidence) for channels 1, 2 and 3, respectively. The extended SSU temperature changes are in good agreement with those from the Microwave Limb Sounder (MLS) on the Aura satellite, with both exhibiting a cooling trend of ~0.6 ± 0.3 K decade⁻¹ in the upper stratosphere from 2004 to 2012. The extended SSU record is found to be in agreement with high-top coupled atmosphere–ocean models over the 1980–2012 period, including the continued cooling over the first decade of the 21st century.
Abstract:
This paper presents a GIS-based multicriteria flood risk assessment and mapping approach applied to coastal drainage basins where hydrological data are not available. It covers the risk from several possible processes: coastal inundation (storm surge), river, estuarine and flash floods, in both urban and natural areas, and fords. Based on the causes of these processes, several environmental indicators were selected to build up the risk assessment. Geoindicators include geological-geomorphological properties of Quaternary sedimentary units, water table, drainage basin morphometry, coastal dynamics, beach morphodynamics and microclimatic characteristics. Bioindicators involve coastal plain and low-slope native vegetation categories and two alteration states. Anthropogenic indicators encompass land use category properties such as type, occupation density, urban structure type and degree of occupation consolidation. The selected indicators were stored within an expert Geoenvironmental Information System developed for the State of Sao Paulo Coastal Zone (SIIGAL), whose attributes were classified mathematically through deterministic approaches in order to estimate natural susceptibilities (Sn), human-induced susceptibilities (Sa), the return period of rain events (Ri), potential damages (Dp) and the risk classification (R), according to the equation R = (Sn · Sa · Ri) · Dp. Thematic maps were automatically processed within the SIIGAL, in which automata cells ("geoenvironmental management units") aggregating geological-geomorphological and land use/native vegetation categories were the units of classification. The method has been applied to 32 small drainage basins in the Northern Littoral of the State of Sao Paulo (Brazil), proving very useful for coastal zone public policy, civil defense programs and flood management.
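The combination rule itself is a simple product of scores. A sketch with made-up ordinal values (the actual attribute scales live in the SIIGAL system and are not reproduced here):

```python
def flood_risk(sn, sa, ri, dp):
    """Risk classification R = (Sn * Sa * Ri) * Dp, combining natural
    susceptibility, human-induced susceptibility, a rain-event
    return-period factor and potential damage. The score scales used
    below are hypothetical ordinal values, not the paper's."""
    return (sn * sa * ri) * dp

# Example cell: high natural susceptibility (3), moderate human-induced
# susceptibility (2), frequent rain events (2), high potential damage (3).
r = flood_risk(3, 2, 2, 3)
```

Because the rule is multiplicative, a cell scores high risk only when susceptibility, triggering rainfall and exposure are all non-negligible; a zero in any factor zeroes the risk class.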
Abstract:
The multivariate skew-t distribution (J Multivar Anal 79:93-113, 2001; J R Stat Soc, Ser B 65:367-389, 2003; Statistics 37:359-363, 2003) includes the Student t, skew-Cauchy and Cauchy distributions as special cases and the normal and skew-normal ones as limiting cases. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated-measures, pretest/post-test data under a multivariate null-intercept measurement error model (J Biopharm Stat 13(4):763-771, 2003) in which the random errors and the unobserved value of the covariate (latent variable) follow Student t and skew-t distributions, respectively. The results and methods are numerically illustrated with an example from the field of dentistry.
Abstract:
We report a statistical analysis of Doppler broadening coincidence data of electron-positron annihilation radiation in silicon using a ²²Na source. The Doppler broadening coincidence spectrum was fit using a model function that included positron annihilation at rest with 1s, 2s, 2p, and valence band electrons. In-flight positron annihilation was also fit. The response functions of the detectors accounted for backscattering, combinations of Compton effects, pileup, ballistic deficit, and pulse-shaping problems. The procedure allows the quantitative determination of positron annihilation with core and valence electron intensities, as well as their standard deviations, directly from the experimental spectrum. The core and valence band electron annihilation intensities obtained were 2.56(9)% and 97.44(9)%, respectively. These intensities are consistent with published experimental data treated by conventional analysis methods. This new procedure has the advantage of allowing one to distinguish additional effects from those associated with the detection system response function.
Abstract:
MCNP has long stood as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers have become faster and more affordable. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may prove unfeasible if a straight analogue transport procedure (no variance reduction techniques) is employed and precise results are demanded. Calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed-source representation in MCNP to perform this task with more effective sampling: the neutron population in the vicinity of the tallying region is characterized and then used in a geometrically reduced coupled simulation. An extended analysis of source-dependent parameters is carried out to understand their influence on simulation performance and on the validity of results. Although discrepant results were observed for small enveloping regions, the procedure proves very efficient, giving adequate and precise results in shorter times than the standard analogue procedure.
Abstract:
Royal palm tree peroxidase (RPTP) is a very stable enzyme with regard to acidity, temperature, H₂O₂, and organic solvents. Thus, RPTP is a promising candidate for developing H₂O₂-sensitive biosensors for diverse applications in industry and analytical chemistry. RPTP belongs to the family of class III secretory plant peroxidases, which includes horseradish peroxidase isozyme C and soybean and peanut peroxidases. Here we report the X-ray structure of native RPTP isolated from royal palm tree (Roystonea regia), refined to a resolution of 1.85 Å. RPTP has the same overall folding pattern as the plant peroxidase superfamily, and it contains one heme group and two calcium-binding sites in similar locations. The three-dimensional structure of RPTP was solved for a hydroperoxide complex state, and it revealed a bound 2-(N-morpholino)ethanesulfonic acid (MES) molecule positioned at a putative secondary substrate-binding site. Nine N-glycosylation sites are clearly defined in the RPTP electron-density maps, revealing for the first time the conformations of the glycan chains of this highly glycosylated enzyme. Furthermore, statistical coupling analysis (SCA) of the plant peroxidase superfamily was performed. This sequence-based method identified a set of evolutionarily conserved sites that map to regions surrounding the heme prosthetic group. The SCA matrix also predicted a set of energetically coupled residues that are involved in maintaining the structural fold of plant peroxidases. The combination of crystallographic data and SCA analysis provides information about the key structural elements that could help explain the unique stability of RPTP.
Abstract:
We discuss the connection between information and copula theories by showing that a copula can be employed to decompose the information content of a multivariate distribution into marginal and dependence components, with the latter quantified by the mutual information. We define the information excess as a measure of deviation from a maximum-entropy distribution. The idea of marginal-invariant dependence measures is also discussed and used to show that empirical linear correlation underestimates the amplitude of the actual correlation in the case of non-Gaussian marginals. The mutual information is shown to provide an upper bound for the asymptotic empirical log-likelihood of a copula. An analytical expression for the information excess of T-copulas is provided, allowing for simple model identification within this family. We illustrate the framework with a financial data set.
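The claim that linear correlation understates dependence under non-Gaussian marginals is easy to reproduce: apply monotone transforms to the marginals of a correlated Gaussian pair (leaving the copula, and hence the rank correlation, unchanged) and watch the Pearson coefficient collapse. The sample size and the particular log-normal transforms below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
rho = 0.8
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                            size=200_000)

# Monotone transforms give heavy-tailed (log-normal) marginals while
# leaving the underlying copula untouched.
x, y = np.exp(z[:, 0]), np.exp(3.0 * z[:, 1])

pearson = np.corrcoef(x, y)[0, 1]

def ranks(a):
    """Rank transform: coordinates of the empirical copula."""
    return np.argsort(np.argsort(a))

rank_corr = np.corrcoef(ranks(x), ranks(y))[0, 1]
```

The rank correlation stays near its Gaussian value (about 0.79 for ρ = 0.8), while the Pearson coefficient of the transformed pair drops to a small fraction of that, illustrating why copula-level (marginal-invariant) measures are preferable for non-Gaussian data.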
Abstract:
In the present work, a method for the determination of Ca, Fe, Ga, Na, Si and Zn in alumina (Al₂O₃) by inductively coupled plasma optical emission spectrometry (ICP OES) with axial viewing is presented. Preliminary studies revealed intense aluminum spectral interference over the majority of elements, and reaction between aluminum and quartz to form aluminosilicate, drastically reducing the lifetime of the torch. To overcome these problems, alumina samples (250 mg) were dissolved with 5 mL HCl + 1.5 mL H₂SO₄ + 1.5 mL H₂O in a microwave oven. After complete dissolution, the volume was made up to 20 mL and aluminum was precipitated as Al(OH)₃ with NH₃ (by bubbling NH₃ into the solution up to pH ≈ 8, for 10 min). The use of internal standards (Fe/Be, Ga/Dy, Zn/In and Na/Sc) was essential to obtain precise and accurate results. The reliability of the proposed method was checked by analysis of an alumina certified reference material (Alumina Reduction Grade-699, NIST). The concentrations found (0.037% w w⁻¹ CaO, 0.013% w w⁻¹ Fe₂O₃, 0.012% w w⁻¹ Ga₂O₃, 0.49% w w⁻¹ Na₂O, 0.014% w w⁻¹ SiO₂ and 0.013% w w⁻¹ ZnO) showed no statistical difference from the certified values at a 95% confidence level.
Abstract:
A number of recent works have introduced statistical methods for detecting genetic loci that affect phenotypic variability, which we refer to as variability-controlling quantitative trait loci (vQTL). These are genetic variants whose allelic state predicts how much phenotype values will vary about their expected means. Such loci are of great potential interest in both human and non-human genetic studies, one reason being that a detected vQTL could represent a previously undetected interaction with other genes or environmental factors. The simultaneous publication of these new methods in different journals has in many cases precluded opportunity for comparison. We survey some of these methods, the respective trade-offs they imply, and the connections between them. The methods fall into three main groups: classical non-parametric, fully parametric, and semi-parametric two-stage approximations. Choosing between alternatives involves balancing the need for robustness, flexibility, and speed. For each method, we identify important assumptions and limitations, including those of practical importance, such as their scope for including covariates and random effects. We show in simulations that both parametric methods and their semi-parametric approximations can give elevated false positive rates when they ignore mean-variance relationships intrinsic to the data generation process. We conclude that choice of method depends on the trait distribution, the need to include non-genetic covariates, and the population size and structure, coupled with a critical evaluation of how these fit with the assumptions of the statistical model.
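One of the classical non-parametric-style options such surveys cover can be sketched as a Brown–Forsythe-type test: a one-way ANOVA on absolute deviations from each group's median, so that a mean-shifting QTL does not masquerade as a variance signal. The genotype groups and effect sizes below are synthetic assumptions:

```python
import numpy as np

def brown_forsythe_stat(groups):
    """Brown-Forsythe statistic: a one-way ANOVA F computed on absolute
    deviations from each group's median. Large values indicate that the
    phenotype variance differs across genotype groups (a vQTL signal)."""
    z = [np.abs(g - np.median(g)) for g in groups]
    k = len(z)
    n = sum(len(zi) for zi in z)
    grand = np.concatenate(z).mean()
    between = sum(len(zi) * (zi.mean() - grand) ** 2 for zi in z) / (k - 1)
    within = sum(((zi - zi.mean()) ** 2).sum() for zi in z) / (n - k)
    return between / within

rng = np.random.default_rng(4)
same_var = [rng.normal(0, 1, 500), rng.normal(2, 1, 500)]  # mean QTL only
diff_var = [rng.normal(0, 1, 500), rng.normal(0, 3, 500)]  # vQTL
```

Using median-centred absolute deviations is what buys robustness to non-normal traits; the fully parametric alternatives the survey discusses trade that robustness for power and for the ability to model covariates explicitly.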
Abstract:
In this essay, a method is proposed for comparing the asymptotic power of the multivariate unit root tests proposed in Phillips & Durlauf (1986) and Flôres, Preumont & Szafarz (1996). To determine the asymptotic power of the tests, their asymptotic distributions are derived under the null hypothesis and under the set of alternative hypotheses described in Phillips (1988). In addition, a test that combines characteristics of both tests is proposed, and its distributions under the null hypothesis and the same set of alternative hypotheses are derived. This allows us to determine what causes any difference in the asymptotic power of the two tests against the set of alternative hypotheses considered.