933 results for Statistical factor analysis
Abstract:
Geophysical time series sometimes exhibit serial correlations that are stronger than can be captured by the commonly used first‐order autoregressive model. In this study we demonstrate that a power law statistical model serves as a useful upper bound for the persistence of total ozone anomalies on monthly to interannual timescales. Such a model is usually characterized by the Hurst exponent. We show that the estimation of the Hurst exponent in time series of total ozone is sensitive to various choices made in the statistical analysis, especially whether and how the deterministic (including periodic) signals are filtered from the time series, and the frequency range over which the estimation is made. In particular, care must be taken to ensure that the estimate of the Hurst exponent accurately represents the low‐frequency limit of the spectrum, which is the part that is relevant to long‐term correlations and the uncertainty of estimated trends. Otherwise, spurious results can be obtained. Based on this analysis, and using an updated equivalent effective stratospheric chlorine (EESC) function, we predict that an increase in total ozone attributable to EESC should be detectable at the 95% confidence level by 2015 at the latest in southern midlatitudes, and by 2020–2025 at the latest over 30°–45°N, with the time to detection increasing rapidly with latitude north of this range.
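The spectral estimation the abstract describes can be illustrated with a minimal sketch (not the authors' code): fit the low-frequency slope β of the log periodogram and convert it via H = (β + 1)/2, the relation for a power-law spectrum S(f) ∝ f^(−β). The frequency band, the mean-only detrending, and the test series are assumptions of this sketch.

```python
import numpy as np

def estimate_hurst(x, band=0.25):
    """Estimate the Hurst exponent from the low-frequency slope of the periodogram."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # remove the mean (deterministic signal)
    psd = np.abs(np.fft.rfft(x)) ** 2       # raw periodogram (up to a constant)
    freqs = np.fft.rfftfreq(x.size)
    keep = (freqs > 0) & (freqs <= band * freqs.max())  # low-frequency limit only
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(psd[keep]), 1)
    beta = -slope                           # spectrum assumed S(f) ~ f**(-beta)
    return (beta + 1) / 2                   # H = (beta + 1)/2 for a power-law process

rng = np.random.default_rng(0)
h_white = estimate_hurst(rng.standard_normal(4096))  # white noise gives H near 0.5
```

Restricting the fit to the lowest frequencies mirrors the abstract's warning: using the whole spectrum would mix in high-frequency behaviour irrelevant to long-term correlations.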
Abstract:
Simulations of 15 coupled chemistry climate models, for the period 1960–2100, are presented. The models include a detailed stratosphere, as well as including a realistic representation of the tropospheric climate. The simulations assume a consistent set of changing greenhouse gas concentrations, as well as temporally varying chlorofluorocarbon concentrations in accordance with observations for the past and expectations for the future. The ozone results are analyzed using a nonparametric additive statistical model. Comparisons are made with observations for the recent past, and the recovery of ozone, indicated by a return to 1960 and 1980 values, is investigated as a function of latitude. Although chlorine amounts are simulated to return to 1980 values by about 2050, with only weak latitudinal variations, column ozone amounts recover at different rates due to the influence of greenhouse gas changes. In the tropics, simulated peak ozone amounts occur by about 2050 and thereafter total ozone column declines. Consequently, simulated ozone does not recover to values which existed prior to the early 1980s. The results also show a distinct hemispheric asymmetry, with recovery to 1980 values in the Northern Hemisphere extratropics ahead of the chlorine return by about 20 years. In the Southern Hemisphere midlatitudes, ozone is simulated to return to 1980 levels only 10 years ahead of chlorine. In the Antarctic, annually averaged ozone recovers at about the same rate as chlorine in high latitudes and hence does not return to 1960s values until the last decade of the simulations.
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_ij, the probability that a visit made by the i-th pollinator species takes place on the j-th plant species; (2) q_ij, the probability that a visit received by the j-th plant species is made by the i-th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities, based on a given empirical data set. The estimated mean values for p_ij and q_ij reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_ij and q_ij decreases with higher numbers of recorded visits.
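A minimal sketch of the Dirichlet estimation step, using a hypothetical visitation matrix and an assumed symmetric prior concentration of 1 (the abstract does not specify either):

```python
import numpy as np

# Hypothetical visitation counts: rows = pollinator species, columns = plant species.
visits = np.array([[12, 3, 0],
                   [ 5, 9, 1]], dtype=float)

alpha = 1.0  # assumed symmetric Dirichlet prior concentration

# p[i, j]: probability that a visit made by pollinator i lands on plant j
p = (visits + alpha) / (visits.sum(axis=1, keepdims=True) + alpha * visits.shape[1])

# q[i, j]: probability that a visit received by plant j was made by pollinator i
q = (visits + alpha) / (visits.sum(axis=0, keepdims=True) + alpha * visits.shape[0])
```

The posterior means reflect the relative visit counts, and because the Dirichlet posterior concentrates as counts grow, the uncertainty of each estimate shrinks with more recorded visits, as the abstract states.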
Abstract:
A novel diagnostic tool is presented, based on polar-cap temperature anomalies, for visualizing daily variability of the Arctic stratospheric polar vortex over multiple decades. This visualization illustrates the ubiquity of extended-time-scale recoveries from stratospheric sudden warmings, termed here polar-night jet oscillation (PJO) events. These are characterized by an anomalously warm polar lower stratosphere that persists for several months. Following the initial warming, a cold anomaly forms in the middle stratosphere, as does an anomalously high stratopause, both of which descend while the lower-stratospheric anomaly persists. These events are characterized in four datasets: Microwave Limb Sounder (MLS) temperature observations; the 40-yr ECMWF Re-Analysis (ERA-40) and Modern Era Retrospective Analysis for Research and Applications (MERRA) reanalyses; and an ensemble of three 150-yr simulations from the Canadian Middle Atmosphere Model. The statistics of PJO events in the model are found to agree very closely with those of the observations and reanalyses. The time scale for the recovery of the polar vortex following sudden warmings correlates strongly with the depth to which the warming initially descends. PJO events occur following roughly half of all major sudden warmings and are associated with an extended period of suppressed wave-activity fluxes entering the polar vortex. They follow vortex splits more frequently than they do vortex displacements. They are also related to weak vortex events as identified by the northern annular mode; in particular, those weak vortex events followed by a PJO event show a stronger tropospheric response. The long time scales, predominantly radiative dynamics, and tropospheric influence of PJO events suggest that they represent an important source of conditional skill in seasonal forecasting.
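The diagnostic rests on polar-cap temperature anomalies; a small illustration of that underlying quantity is an area-weighted 60–90°N average on a zonal-mean profile. The latitude grid and temperature values below are invented for the example.

```python
import numpy as np

def polar_cap_anomaly(temp, clim, lats):
    """Area-weighted 60-90N polar-cap temperature anomaly for one time and level."""
    cap = lats >= 60.0
    w = np.cos(np.deg2rad(lats[cap]))        # area weighting on a latitude grid
    return float(np.sum(w * (temp[cap] - clim[cap])) / np.sum(w))

lats = np.array([55.0, 65.0, 75.0, 85.0])    # hypothetical zonal-mean grid
temp = np.array([210.0, 212.0, 215.0, 220.0])
clim = np.array([209.0, 210.0, 212.0, 214.0])
anom = polar_cap_anomaly(temp, clim, lats)   # positive = anomalously warm polar cap
```

A persistently positive lower-stratospheric value of this kind of index, lasting months after a sudden warming, is what the abstract labels a PJO event.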
Abstract:
The auditory brainstem response (ABR) is of fundamental importance to the investigation of the auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare ABR manual/visual analysis provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). A total of 160 data samples were analyzed and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between assessments provided by the examiners. For this, we used linear regression in conjunction with the bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners but reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave latency differences: 18% of the investigated waves presented substantial differences (large and moderate), and of these 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems that are employed in the early diagnosis of hearing loss.
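The comparison strategy (linear regression plus bootstrap) can be sketched with hypothetical latency data; agreement between two examiners corresponds to a bootstrap confidence interval for the regression slope that contains 1. The sample sizes and noise levels are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical wave latencies (ms) scored by two examiners on the same 40 records.
examiner_a = rng.normal(5.6, 0.25, size=40)
examiner_b = examiner_a + rng.normal(0.0, 0.05, size=40)  # near-perfect agreement

def bootstrap_slope_ci(x, y, n_boot=2000):
    """Percentile bootstrap CI for the slope of y regressed on x (resampling pairs)."""
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, x.size, size=x.size)
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.percentile(slopes, [2.5, 97.5])

lo, hi = bootstrap_slope_ci(examiner_a, examiner_b)  # CI should bracket slope = 1
```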
First order k-th moment finite element analysis of nonlinear operator equations with stochastic data
Abstract:
We develop and analyze a class of efficient Galerkin approximation methods for uncertainty quantification of nonlinear operator equations. The algorithms are based on sparse Galerkin discretizations of tensorized linearizations at nominal parameters. Specifically, we consider abstract, nonlinear, parametric operator equations J(α, u) = 0 for random input α(ω) with almost sure realizations in a neighborhood of a nominal input parameter α₀. Under some structural assumptions on the parameter dependence, we prove existence and uniqueness of a random solution, u(ω) = S(α(ω)). We derive a multilinear, tensorized operator equation for the deterministic computation of k-th order statistical moments of the random solution's fluctuations u(ω) − S(α₀). We introduce and analyze sparse tensor Galerkin discretization schemes for the efficient, deterministic computation of the k-th statistical moment equation. We prove a shift theorem for the k-point correlation equation in anisotropic smoothness scales and deduce that sparse tensor Galerkin discretizations of this equation converge in accuracy vs. complexity which equals, up to logarithmic terms, that of the Galerkin discretization of a single instance of the mean field problem. We illustrate the abstract theory for nonstationary diffusion problems in random domains.
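In the scalar case the idea of propagating moments through a linearization can be seen on a toy operator equation (an assumption of this sketch, not the problem treated in the paper): the second moment of the solution fluctuation follows from the sensitivity A = −J_u⁻¹ J_α at the nominal parameter, and can be checked against Monte Carlo.

```python
import numpy as np

def S(alpha):
    """Solve the toy operator equation J(alpha, u) = u + u**3 - alpha = 0 by Newton."""
    u = float(alpha)
    for _ in range(50):
        u -= (u + u**3 - alpha) / (1.0 + 3.0 * u**2)
    return u

alpha0 = 1.0
u0 = S(alpha0)
A = 1.0 / (1.0 + 3.0 * u0**2)   # sensitivity dS/dalpha = -J_u^{-1} J_alpha at alpha0

var_alpha = 0.01                # assumed variance of the random input alpha(omega)
var_u_lin = A**2 * var_alpha    # 2nd statistical moment of the linearized fluctuation

# Monte Carlo check of the linearized second moment
rng = np.random.default_rng(1)
alphas = alpha0 + np.sqrt(var_alpha) * rng.standard_normal(20000)
var_u_mc = np.var(np.array([S(a) for a in alphas]))
```

For vector-valued problems the same step becomes the tensorized (k-point correlation) equation the abstract describes, which the paper discretizes with sparse tensor Galerkin schemes.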
Abstract:
The growing energy consumption in the residential sector represents about 30% of global demand. This calls for Demand Side Management solutions that drive behavioural change among end consumers, with the aim of reducing overall consumption as well as shifting it to periods of lower demand, when energy is cheaper to generate. Demand Side Management solutions require detailed knowledge of the patterns of energy consumption. The profile of electricity demand in the residential sector is highly correlated with the time of active occupancy of the dwellings; therefore, in this study the occupancy patterns in Spanish properties were determined using the 2009–2010 Time Use Survey (TUS), conducted by the National Statistical Institute of Spain. The survey identifies three peaks in active occupancy, which coincide with morning, noon and evening. This information was used as input to a stochastic model which generates active occupancy profiles of dwellings, with the aim of simulating domestic electricity consumption. TUS data were also used to identify which appliance-related activities could be considered for Demand Side Management solutions during the three peaks of occupancy.
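One simple way such a stochastic occupancy model can be sketched is a first-order chain driven by a time-of-day probability profile; the peak hours, probabilities, and persistence parameter below are invented placeholders, not TUS estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical hourly probabilities of active occupancy with three peaks
# (morning, noon, evening); values are placeholders, not TUS estimates.
p_active = np.full(24, 0.15)
p_active[[7, 8]] = 0.6
p_active[[13, 14]] = 0.7
p_active[[20, 21, 22]] = 0.8

def simulate_day(p_active, persistence=0.7):
    """First-order stochastic occupancy model: each hour mixes the time-of-day
    probability with persistence of the previous hour's state."""
    state = np.zeros(24, dtype=int)
    state[0] = rng.random() < p_active[0]
    for h in range(1, 24):
        p = persistence * state[h - 1] + (1.0 - persistence) * p_active[h]
        state[h] = rng.random() < p
    return state

profile = simulate_day(p_active)  # one day: 1 = actively occupied, 0 = not
```

Many such synthetic daily profiles, drawn per dwelling, can then drive an appliance-level electricity demand simulation.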
Abstract:
As in any field of scientific inquiry, advancements in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this article reviews a class of newer statistical models that has not yet been widely adopted in the field but has garnered interest in other fields of language research: mixed-effects models. These models are introduced, and their potential benefits for the second language researcher are discussed. A simple example of mixed-effects data analysis using the statistical software package R (R Development Core Team, 2011) is provided as an introduction to the use of these statistical techniques, and to exemplify how such analyses can be reported in research articles. It is concluded that mixed-effects models provide the second language researcher with a powerful tool for the analysis of a variety of types of second language acquisition data.
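The paper's worked example is in R; a roughly analogous random-intercept analysis can be sketched in Python with statsmodels. The data here are simulated, and the effect sizes are assumed values, not results from any SLA study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated SLA-style data: reaction times for 20 learners x 10 items, two conditions.
n_subj, n_item = 20, 10
subj = np.repeat(np.arange(n_subj), n_item)
cond = np.tile(np.arange(n_item) % 2, n_subj)
rt = (900.0 + 50.0 * cond                      # assumed fixed effect of condition
      + rng.normal(0, 40, n_subj)[subj]        # by-subject random intercepts
      + rng.normal(0, 25, n_subj * n_item))    # residual noise

data = pd.DataFrame({"rt": rt, "cond": cond, "subject": subj})

# Random-intercept model: fixed effect of condition, subjects as grouping factor.
result = smf.mixedlm("rt ~ cond", data, groups=data["subject"]).fit()
effect = result.params["cond"]                 # recovers the ~50 ms condition effect
```

The random intercept absorbs between-learner variability that a plain t-test on pooled observations would wrongly treat as independent noise.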
Abstract:
The protein encoded by the PPARGC1A gene is expressed at high levels in metabolically active tissues and is involved in the control of oxidative stress via reactive oxygen species detoxification. Several recent reports suggest that the PPARGC1A Gly482Ser (rs8192678) missense polymorphism may relate inversely with blood pressure. We used conventional meta-analysis methods to assess the association between Gly482Ser and systolic (SBP) or diastolic blood pressures (DBP) or hypertension in 13,949 individuals from 17 studies, of which 6,042 were previously unpublished observations. The studies comprised cohorts of white European, Asian, and American Indian adults, and adolescents from South America. Stratified analyses were conducted to control for population stratification. Pooled genotype frequencies were 0.47 (Gly482Gly), 0.42 (Gly482Ser), and 0.11 (Ser482Ser). We found no evidence of association between Gly482Ser and SBP [Gly482Gly: mean = 131.0 mmHg, 95% confidence interval (CI) = 130.5-131.5 mmHg; Gly482Ser mean = 133.1 mmHg, 95% CI = 132.6-133.6 mmHg; Ser482Ser: mean = 133.5 mmHg, 95% CI = 132.5-134.5 mmHg; P = 0.409] or DBP (Gly482Gly: mean = 80.3 mmHg, 95% CI = 80.0-80.6 mmHg; Gly482Ser mean = 81.5 mmHg, 95% CI = 81.2-81.8 mmHg; Ser482Ser: mean = 82.1 mmHg, 95% CI = 81.5-82.7 mmHg; P = 0.651). Contrary to previous reports, we did not observe significant effect modification by sex (SBP, P = 0.966; DBP, P = 0.715). We were also unable to confirm the previously reported association between the Ser482 allele and hypertension [odds ratio: 0.97, 95% CI = 0.87-1.08, P = 0.585]. These results were materially unchanged when analyses were focused on whites only. 
However, statistical evidence of gene-age interaction was apparent for DBP [Gly482Gly: 73.5 (72.8, 74.2), Gly482Ser: 77.0 (76.2, 77.8), Ser482Ser: 79.1 (77.4, 80.9), P = 4.20 × 10⁻¹²] and SBP [Gly482Gly: 121.4 (120.4, 122.5), Gly482Ser: 125.9 (124.6, 127.1), Ser482Ser: 129.2 (126.5, 131.9), P = 7.20 × 10⁻¹²] in individuals <50 yr (n = 2,511); these genetic effects were absent in those older than 50 yr (n = 5,088) (SBP, P = 0.41; DBP, P = 0.51). Our findings suggest that the PPARGC1A Ser482 allele may be associated with higher blood pressure, but this is only apparent in younger adults.
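The inverse-variance pooling underlying a conventional fixed-effect meta-analysis can be sketched in a few lines; the per-study means and standard errors below are hypothetical, not values from the 17 studies analysed above.

```python
import math

def pool_fixed_effect(means, ses):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

# Hypothetical per-study mean SBP (mmHg) and standard errors for one genotype group
pooled, se = pool_fixed_effect([131.2, 130.4, 131.8], [0.8, 1.1, 0.6])
```

More precise studies get larger weights, and the pooled standard error is smaller than any single study's, which is what allows a meta-analysis of 13,949 individuals to rule out effects that individual cohorts could not.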
Abstract:
This chapter applies rigorous statistical analysis to existing datasets of medieval exchange rates quoted in merchants’ letters sent from Barcelona, Bruges and Venice between 1380 and 1410, which survive in the archive of Francesco di Marco Datini of Prato. First, it tests the exchange rates for stationarity. Second, it uses regression analysis to examine the seasonality of exchange rates at the three financial centres and compares them against contemporary descriptions by the merchant Giovanni di Antonio da Uzzano. Third, it tests for structural breaks in the exchange rate series.
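The seasonality step can be illustrated with a dummy-variable regression on calendar months; the series below is simulated with an assumed annual cycle and is not the Datini data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated monthly exchange-rate series with an assumed annual seasonal cycle
n = 240
month = np.arange(n) % 12
rate = 22.0 + 0.5 * np.sin(2.0 * np.pi * month / 12.0) + rng.normal(0, 0.1, n)

# Regression on monthly dummy variables: one mean level per calendar month
X = np.zeros((n, 12))
X[np.arange(n), month] = 1.0
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

seasonal_range = coef.max() - coef.min()  # size of the estimated seasonal swing
```

The fitted monthly coefficients give the seasonal profile that can then be compared, month by month, with Uzzano's contemporary descriptions.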
Abstract:
We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
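A minimal sketch of the analysis update described above: Newtonian relaxation of concentration toward an observation, with the mean-thickness increment taken proportional to the concentration increment. The relaxation strength and the proportionality constant are assumed values, not the paper's tuned constants.

```python
def nudge(c_model, c_obs, h_model, gamma=0.1, r=1.0):
    """Newtonian relaxation of ice concentration toward an observation, with a
    mean-thickness increment proportional to the concentration increment.
    gamma (relaxation strength) and r (metres of mean thickness per unit of
    concentration) are assumed values."""
    dc = gamma * (c_obs - c_model)       # concentration analysis increment
    c_new = min(max(c_model + dc, 0.0), 1.0)
    h_new = max(h_model + r * dc, 0.0)   # proportional mean-thickness update
    return c_new, h_new

c_new, h_new = nudge(c_model=0.8, c_obs=0.9, h_model=1.6)
```

Coupling the thickness increment to the concentration increment is what distinguishes this scheme from updates that conserve mean or actual thickness, which the abstracts report performing poorly.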
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measurements used, would produce a substantial inflation of Type-1 error rates, when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real data applications to illustrate further strengths of mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and recommend the mixed-effects model analysis.
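For reference, the by-participant resolution measure mentioned above, the Goodman-Kruskal gamma coefficient, can be computed per participant as follows; the confidence ratings and recall outcomes are hypothetical.

```python
from itertools import combinations

def gamma_coefficient(judgments, outcomes):
    """Goodman-Kruskal gamma between metacognitive judgments and memory outcomes."""
    concordant = discordant = 0
    for (j1, a1), (j2, a2) in combinations(zip(judgments, outcomes), 2):
        s = (j1 - j2) * (a1 - a2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical confidence ratings and recall outcomes for one participant
g = gamma_coefficient([4, 2, 5, 1, 3], [1, 0, 1, 0, 1])  # perfect resolution: 1.0
```

The article's point is that feeding such per-participant values into a group-level t-test ignores random item effects; the mixed-effects alternative models items and participants jointly rather than collapsing over items first.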
Abstract:
The aim of this study was to determine whether geographical differences impact the composition of bacterial communities present in the airways of cystic fibrosis (CF) patients attending CF centers in the United States or United Kingdom. Thirty-eight patients were matched on the basis of clinical parameters into 19 pairs comprised of one U.S. and one United Kingdom patient. Analysis was performed to determine what, if any, bacterial correlates could be identified. Two culture-independent strategies were used: terminal restriction fragment length polymorphism (T-RFLP) profiling and 16S rRNA clone sequencing. Overall, 73 different terminal restriction fragment lengths were detected, ranging from 2 to 10 for U.S. and 2 to 15 for United Kingdom patients. The statistical analysis of T-RFLP data indicated that patient pairing was successful and revealed substantial transatlantic similarities in the bacterial communities. A small number of bands was present in the vast majority of patients in both locations, indicating that these are species common to the CF lung. Clone sequence analysis also revealed that a number of species not traditionally associated with the CF lung were present in both sample groups. The species number per sample was similar, but differences in species presence were observed between sample groups. Cluster analysis revealed geographical differences in bacterial presence and relative species abundance. Overall, the U.S. samples showed tighter clustering with each other compared to that of United Kingdom samples, which may reflect the lower diversity detected in the U.S. sample group. The impact of cross-infection and biogeography is considered, and the implications for treating CF lung infections also are discussed.
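Community comparisons of the kind described can be sketched with a simple presence/absence distance between T-RFLP profiles; the fragment lengths below are invented, and the study's actual cluster analysis may use a different metric.

```python
def jaccard_distance(profile_a, profile_b):
    """Jaccard distance between two presence/absence T-RF profiles."""
    a, b = set(profile_a), set(profile_b)
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical terminal restriction fragment lengths (bp) for two patients
us_patient = {68, 204, 370, 490}
uk_patient = {68, 204, 370, 512, 533}
d = jaccard_distance(us_patient, uk_patient)  # 0 = identical communities
```

Pairwise distances of this kind, computed across all patients, are the input to the hierarchical clustering that revealed the tighter grouping of the U.S. samples.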
Abstract:
The paper analyses the impact of a priori determinants of biosecurity behaviour of farmers in Great Britain. We use a dataset collected through a stratified telephone survey of 900 cattle and sheep farmers in Great Britain (400 in England and a further 250 in Wales and Scotland respectively) which took place between 25 March 2010 and 18 June 2010. The survey was stratified by farm type, farm size and region. To test the influence of a priori determinants on biosecurity behaviour we used a behavioural economics method, structural equation modelling (SEM) with observed and latent variables. SEM is a statistical technique for testing and estimating causal relationships amongst variables, some of which may be latent, using a combination of statistical data and qualitative causal assumptions. Thirteen latent variables were identified and extracted, expressing the behaviour and the underlying determining factors. The variables were: experience, economic factors, organic certification of farm, membership in a cattle/sheep health scheme, perceived usefulness of biosecurity information sources, knowledge about biosecurity measures, perceived importance of specific biosecurity strategies, perceived effect (on farm business in the past five years) of welfare/health regulation, perceived effect of severe outbreaks of animal diseases, attitudes towards livestock biosecurity, attitudes towards animal welfare, influence on decision to apply biosecurity measures and biosecurity behaviour. The SEM model applied on the Great Britain sample has an adequate fit according to the measures of absolute, incremental and parsimonious fit.
The results suggest that farmers’ perceived importance of specific biosecurity strategies, organic certification of farm, knowledge about biosecurity measures, attitudes towards animal welfare, perceived usefulness of biosecurity information sources, perceived effect on business during the past five years of severe outbreaks of animal diseases, membership in a cattle/sheep health scheme, attitudes towards livestock biosecurity, influence on decision to apply biosecurity measures, experience and economic factors significantly influence behaviour (overall explaining 64% of the variance in behaviour).