899 results for "Data distribution"
Abstract:
We present a new methodology that couples neutron diffraction experiments over a wide Q range with single chain modelling in order to explore, in a quantitative manner, the intrachain organization of non-crystalline polymers. The technique is based on the assignment of parameters describing the chemical, geometric and conformational characteristics of the polymeric chain, and on the variation of these parameters to minimize the difference between the predicted and experimental diffraction patterns. The method is successfully applied to the study of molten poly(tetrafluoroethylene) at two different temperatures, and provides unambiguous information on the configuration of the chain and its degree of flexibility. From analysis of the experimental data a model is derived with CC and CF bond lengths of 1.58 and 1.36 Å, respectively, a backbone valence angle of 110° and a torsional angle distribution which is characterized by four isomeric states, namely a split trans state at ± 18°, giving rise to a helical chain conformation, and two gauche states at ± 112°. The probability of trans conformers is 0.86 at T = 350°C, which decreases slightly to 0.84 at T = 400°C. Correspondingly, the chain segments are characterized by long all-trans sequences with random changes in sign, rather anisotropic in nature, which give rise to a rather stiff chain. We compare the results of this quantitative analysis of the experimental scattering data with the theoretical predictions of both force fields and molecular orbital conformation energy calculations.
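The temperature dependence of the trans population described above can be illustrated with a minimal Boltzmann-weighted four-state rotational isomeric state sketch. The trans-gauche energy gap `E_tg` below is a hypothetical value chosen only to show the qualitative effect (the trans probability falling slightly with temperature); it is not a fitted parameter from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def trans_probability(E_tg, T_kelvin):
    """Boltzmann population of the two (degenerate) split-trans states
    versus the two (degenerate) gauche states lying E_tg J/mol higher."""
    w_t = 1.0                               # statistical weight of each trans state
    w_g = math.exp(-E_tg / (R * T_kelvin))  # weight of each gauche state
    return 2 * w_t / (2 * w_t + 2 * w_g)

E_tg = 9500.0  # J/mol, assumed (hypothetical) trans-gauche energy gap
p_350 = trans_probability(E_tg, 350 + 273.15)
p_400 = trans_probability(E_tg, 400 + 273.15)
```

With this assumed gap the trans probability sits in the mid-0.8 range and decreases as temperature rises, mirroring the trend reported in the abstract.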
Abstract:
The rapid growth of non-listed real estate funds over the last several years has contributed towards establishing this sector as a major investment vehicle for gaining exposure to commercial real estate. Academic research has not kept up with this development, however, as there are still only a few published studies on non-listed real estate funds. This paper aims to identify the factors driving the total return over a seven-year period. Influential factors tested in our analysis include the weighted underlying direct property returns in each country and sector as well as fund size, investment style, gearing and the distribution yield. Furthermore, we analyze the interaction of non-listed real estate funds with the performance of the overall economy and that of competing asset classes, and find that lagged GDP growth and stock market returns as well as contemporaneous government bond rates are significant and positive predictors of annual fund performance.
Abstract:
A range of physiological parameters (canopy light transmission, canopy shape, leaf size, flowering and flushing intensity) were measured from the International Clone Trial, typically over the course of two years. Data were collected from six locations, these being: Brazil, Ecuador, Trinidad, Venezuela, Côte d’Ivoire and Ghana. Canopy shape varied significantly between clones, although it showed little variation between locations. Genotypic variation in leaf size was differentially affected by the growth location; such differences appeared to underlie a genotype by environment interaction in relation to canopy light transmission. Flushing data were recorded at monthly intervals over the course of a year. Within each location, a significant interaction was observed between genotype and time of year, suggesting that some genotypes respond to a greater extent than others to environmental stimuli. A similar interaction was observed for flowering data, where significant correlations were found between flowering intensity and temperature in Brazil and flowering intensity and rainfall in Côte d’Ivoire. The results demonstrate the need for local evaluation of cocoa clones and also suggest that the management practices for particular planting material may need to be fine-tuned to the location in which they are cultivated.
Abstract:
The collection of wind speed time series by means of digital data loggers occurs in many domains, including civil engineering, environmental sciences and wind turbine technology. Since averaging intervals are often significantly larger than typical system time scales, the information lost has to be recovered in order to reconstruct the true dynamics of the system. In the present work we present a simple algorithm capable of generating a real-time wind speed time series from data logger records containing the average, maximum, and minimum values of the wind speed in a fixed interval, as well as the standard deviation. The signal is generated from a generalized random Fourier series. The spectrum can be matched to any desired theoretical or measured frequency distribution. Extreme values are specified through a postprocessing step based on the concept of constrained simulation. Applications of the algorithm to 10-min wind speed records logged at a test site at 60 m height above the ground show that the recorded 10-min values can be reproduced by the simulated time series to a high degree of accuracy.
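The core reconstruction idea above can be sketched as follows: build a high-resolution segment as a random-phase Fourier series and rescale it to the logger's recorded mean and standard deviation. The spectrum shape, number of modes and sampling rate are simplifying assumptions, and the paper's extra constrained-simulation step for matching the recorded minimum and maximum is omitted.

```python
import math
import random

def reconstruct_segment(mean, std, n=600, n_modes=40, seed=0):
    """Sketch: synthesize an n-sample wind speed segment from a random-phase
    Fourier series, rescaled to a recorded mean and standard deviation."""
    rng = random.Random(seed)
    # random-phase Fourier series with a simple 1/k amplitude roll-off (assumed spectrum)
    amps = [1.0 / k for k in range(1, n_modes + 1)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_modes)]
    x = [sum(a * math.cos(2.0 * math.pi * k * t / n + ph)
             for k, (a, ph) in enumerate(zip(amps, phases), start=1))
         for t in range(n)]
    # rescale so the sample mean and standard deviation match the logger record
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / n)
    return [mean + std * (v - m) / s for v in x]

# hypothetical 10-min record: mean 8.2 m/s, standard deviation 1.5 m/s at 1 Hz
segment = reconstruct_segment(mean=8.2, std=1.5)
```

Because the rescaling is exact, the synthetic segment reproduces the recorded mean and standard deviation by construction; matching a target spectrum more faithfully would replace the assumed 1/k roll-off with the desired spectral amplitudes.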
Abstract:
The effect of surrounding lower buildings on the wind pressure distribution on a high-rise building is investigated by computational fluid dynamics (CFD). When B/H = 0.1, the wind pressure on the windward side is reduced, especially on the lower part, but varying the number of layers of surrounding buildings makes little difference, which agrees with our previous wind tunnel experiment data. We then varied the aspect ratio from 0.1 to 2 to represent different airflow regimes: skimming flow (SF) and wake interference (WI). The average Cp increases as B/H increases. Across the airflow regimes, the difference becomes insignificant once the number of building layers exceeds 2. From an engineering point of view, it is therefore sufficient to include only the first layer of surrounding buildings in natural ventilation design using CFD simulation or wind tunnel experiments.
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble merely as a source of information rather than as the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification that acts not on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness under operational constraints and statistical significance given a large sample.
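The dressing step can be sketched as a Gaussian mixture centred on affinely mapped ensemble members. This is a minimal illustration only: the abstract's additional ingredients (the weighted climatological component and the joint estimation of the affine and kernel parameters by score minimisation) are omitted, and the parameter values below are placeholders.

```python
import math

def akd_density(y, ensemble, a=0.0, b=1.0, s=1.0):
    """Predictive density at y from a kernel-dressed ensemble: each member x
    is mapped affinely to z = a + b*x, then dressed with a Gaussian kernel
    of width s (a, b, s would normally be estimated jointly)."""
    z = [a + b * x for x in ensemble]
    norm = 1.0 / (len(z) * s * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((y - zi) / s) ** 2) for zi in z)

# hypothetical five-member forecast ensemble (e.g. temperature anomalies)
ensemble = [2.1, 2.4, 2.9, 3.3, 3.8]
```

The mixture integrates to one, so the result is a proper predictive density; the affine map (a, b) is what distinguishes this sketch from plain kernel dressing, letting a biased or mis-scaled ensemble be corrected as a whole.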
Abstract:
This paper proposes a method for describing the distribution of observed temperatures on any day of the year such that the distribution, and summary statistics of interest derived from the distribution, vary smoothly through the year. The method removes the noise inherent in calculating summary statistics directly from the data, thus easing comparisons of distributions and summary statistics between different periods. The method is demonstrated using daily effective temperatures (DET) derived from observations of temperature and wind speed at De Bilt, Holland. Distributions and summary statistics are obtained from 1985 to 2009 and compared to the period 1904–1984. A two-stage process first obtains parameters of a theoretical probability distribution, in this case the generalized extreme value (GEV) distribution, which describes the distribution of DET on any day of the year. Second, linear models describe seasonal variation in the parameters. Model predictions provide parameters of the GEV distribution, and therefore summary statistics, that vary smoothly through the year. There is evidence of an increasing mean temperature, a decrease in the variability of temperatures mainly in the winter, and a more positive skew (more warm days) in the summer. In the winter, the 2% point (the value below which 2% of observations are expected to fall) has risen by 1.2 °C; in the summer, the 98% point has risen by 0.8 °C. Medians have risen by 1.1 and 0.9 °C in winter and summer, respectively. The method can be used to describe distributions of future climate projections and other climate variables. Further extensions to the methodology are suggested.
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_i,j, the probability that a visit made by the i-th pollinator species takes place on the j-th plant species; (2) q_i,j, the probability that a visit received by the j-th plant species is made by the i-th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities from a given empirical data set. The estimated mean values of p_i,j and q_i,j reflect the relative differences between the recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_i,j and q_i,j decreases with higher numbers of recorded visits.
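The Dirichlet estimates have a closed form that can be sketched directly: with a symmetric Dirichlet prior (the concentration `alpha` below is a hypothetical choice), the posterior mean and variance of each visit probability follow from the observed visit counts. One row of a visitation matrix, i.e. one pollinator species' visits across plant species, gives the p_i,j estimates for that pollinator.

```python
def dirichlet_mean_and_var(counts, alpha=1.0):
    """Posterior means and variances of visit probabilities for one
    pollinator species, given its counts of visits to each plant species,
    under a symmetric Dirichlet(alpha) prior."""
    k = len(counts)
    total = sum(counts) + k * alpha
    means = [(c + alpha) / total for c in counts]
    # marginal Beta variance of each component of a Dirichlet posterior
    variances = [m * (1.0 - m) / (total + 1.0) for m in means]
    return means, variances

# hypothetical row of a visitation matrix: visits by one pollinator to three plants
means, variances = dirichlet_mean_and_var([12, 3, 0])
```

Note that the unobserved interaction (count 0) still receives a small positive probability from the prior, and that multiplying all counts by ten leaves the means nearly unchanged while shrinking the variances, which is exactly the behaviour the abstract describes.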
Abstract:
Background and Aims Leafy vegetable Brassica crops are an important source of dietary calcium (Ca) and magnesium (Mg) and represent potential targets for increasing leaf Ca and Mg concentrations through agronomy or breeding. Although the internal distribution of Ca and Mg within leaves affects the accumulation of these elements, such data are not available for Brassica. The aim of this study was to characterize the internal distribution of Ca and Mg in the leaves of a vegetable Brassica and to determine the effects of altered exogenous Ca and Mg supply on this distribution. Methods Brassica rapa ssp. trilocularis ‘R-o-18’ was grown at four different Ca:Mg treatments for 21 d in a controlled environment. Concentrations of Ca and Mg were determined in fully expanded leaves using inductively coupled plasma-mass spectrometry (ICP-MS). Internal distributions of Ca and Mg were determined in transverse leaf sections at the base and apex of leaves using energy-dispersive X-ray spectroscopy (EDS) with cryo-scanning electron microscopy (cryo-SEM). Key Results Leaf Ca and Mg concentrations were greatest in palisade and spongy mesophyll cells, respectively, although this was dependent on exogenous supply. Calcium accumulation in palisade mesophyll cells was enhanced slightly under high Mg supply; in contrast, Mg accumulation in spongy mesophyll cells was not affected by Ca supply. Conclusions The results are consistent with Arabidopsis thaliana and other Brassicaceae, providing phenotypic evidence that conserved mechanisms regulate leaf Ca and Mg distribution at a cellular scale. The future study of Arabidopsis gene orthologues in mutants of this reference B. rapa genotype will improve our understanding of Ca and Mg homeostasis in plants and may provide a model-to-crop translation pathway for targeted breeding.
Abstract:
Aim Species distribution models (SDMs) based on current species ranges underestimate the potential distribution when projected in time and/or space. A multi-temporal model calibration approach has been suggested as an alternative, and we evaluate this using 13,000 years of data. Location Europe. Methods We used fossil-based records of presence for Picea abies, Abies alba and Fagus sylvatica and six climatic variables for the period 13,000 to 1000 yr BP. To measure the contribution of each 1000-year time step to the total niche of each species (the niche measured by pooling all the data), we employed a principal components analysis (PCA) calibrated with data over the entire range of possible climates. Then we projected both the total niche and the partial niches from single time frames into the PCA space, and tested whether the partial niches were more similar to the total niche than random. Using an ensemble forecasting approach, we calibrated SDMs for each time frame and for the pooled database. We projected each model to current climate and evaluated the results against current pollen data. We also projected all models into the future. Results Niche similarity between the partial and the total-SDMs was almost always statistically significant and increased through time. SDMs calibrated from single time frames gave different results when projected to current climate, providing evidence of a change in the species' realized niches through time. Moreover, they predicted limited climate suitability when compared with the total-SDMs. The same results were obtained when projected to future climates. Main conclusions The realized climatic niche of species differed for current and future climates when SDMs were calibrated considering different past climates. Building the niche as an ensemble through time represents a way forward to a better understanding of a species' range and its ecology in a changing climate.
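The partial-versus-total niche comparison can be illustrated with a simple overlap statistic. The sketch below uses Schoener's D on binned values of a single climate axis as a stand-in for the paper's PCA-based similarity test; the one-dimensional simplification and the example data are purely illustrative.

```python
def schoener_d(sample_a, sample_b, lo, hi, bins=20):
    """Niche overlap D = 1 - 0.5 * sum |p_a - p_b| over shared climate bins,
    where p_a and p_b are the binned occurrence frequencies of two samples
    (e.g. one time frame's presences versus all frames pooled)."""
    width = (hi - lo) / bins

    def hist(sample):
        h = [0] * bins
        for x in sample:
            h[min(int((x - lo) / width), bins - 1)] += 1
        n = len(sample)
        return [c / n for c in h]

    pa, pb = hist(sample_a), hist(sample_b)
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(pa, pb))
```

D runs from 0 (disjoint niches) to 1 (identical niches), so testing whether a time frame's D against the pooled niche exceeds a randomised baseline mirrors the significance test described in the abstract.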
Abstract:
This paper considers the effect of using a GARCH filter on the properties of the BDS test statistic, as well as a number of other issues relating to the application of the test. It is found that, for certain values of the user-adjustable parameters, the finite-sample distribution of the test is far removed from asymptotic normality. In particular, when data generated from some completely different model class are filtered through a GARCH model, the frequency of rejection of iid falls, often substantially. The implication of this result is that it might be inappropriate to use non-rejection of iid of the standardised residuals of a GARCH model as evidence that the GARCH model ‘fits’ the data.
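The filtering step under discussion can be sketched as follows: divide each return by the conditional volatility from a GARCH(1,1) recursion to obtain the standardised residuals that the BDS test is then applied to. The parameters below are fixed, hypothetical values rather than maximum-likelihood estimates, and the BDS statistic itself is not implemented.

```python
import math

def garch_standardise(returns, omega, alpha, beta):
    """Return e_t / sigma_t under a GARCH(1,1) filter with
    sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2,
    initialised at the unconditional variance omega / (1 - alpha - beta)."""
    var = omega / (1.0 - alpha - beta)
    out = []
    for e in returns:
        out.append(e / math.sqrt(var))
        var = omega + alpha * e * e + beta * var
    return out
```

If the filter's parameters match the data-generating process, the standardised residuals recover the underlying iid shocks; the paper's point is that non-rejection of iid on these residuals is weak evidence, since misspecified data can also pass once filtered.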
Abstract:
The response of ten atmospheric general circulation models to orbital forcing at 6 kyr BP has been investigated using the BIOME model, which predicts equilibrium vegetation distribution, as a diagnostic. Several common features emerge: (a) reduced tropical rain forest as a consequence of increased aridity in the equatorial zone, (b) expansion of moisture-demanding vegetation in the Old World subtropics as a consequence of the expansion of the Afro–Asian monsoon, (c) an increase in warm grass/shrub in the Northern Hemisphere continental interiors in response to warming and enhanced aridity, and (d) a northward shift in the tundra–forest boundary in response to a warmer growing season at high northern latitudes. These broadscale features are consistent from model to model, but there are differences in their expression at a regional scale. Vegetation changes associated with monsoon enhancement and high-latitude summer warming are consistent with palaeoenvironmental observations, but the simulated shifts in vegetation belts are too small in both cases. Vegetation changes due to warmer and more arid conditions in the midcontinents of the Northern Hemisphere are consistent with palaeoenvironmental data from North America, but data from Eurasia suggest conditions were wetter at 6 kyr BP than today. The models show quantitatively similar vegetation changes in the intertropical zone, and in the northern and southern extratropics. The small differences among models in the magnitude of the global vegetation response are not related to differences in global or zonal climate averages, but reflect differences in simulated regional features. Regional-scale analyses will therefore be necessary to identify the underlying causes of such differences among models.
Abstract:
New compilations of African pollen and lake data are compared with climate (CCM1, NCAR, Boulder) and vegetation (BIOME 1.2, GSG, Lund) simulations for the last glacial maximum (LGM) and early to mid-Holocene (EMH). The simulated LGM climate was ca 4°C colder and drier than present, with maximum reduction in precipitation in semi-arid regions. Biome simulations show lowering of montane vegetation belts and expansion of southern xerophytic associations, but no change in the distribution of deserts and tropical rain forests. The lakes show LGM conditions similar to or drier than present throughout northern and tropical Africa. Pollen data indicate lowering of montane vegetation belts, the stability of the Sahara, and a reduction of rain forest. The paleoenvironmental data are consistent with the simulated changes in temperature and moisture budgets, although they suggest the climate model underestimates equatorial aridity. EMH simulations show temperatures slightly lower than present and increased monsoonal precipitation in the eastern Sahara and East Africa. Biome simulations show an upward shift of montane vegetation belts, fragmentation of xerophytic vegetation in southern Africa, and a major northward shift of the southern margin of the eastern Sahara. The lakes indicate conditions wetter than present across northern Africa. Pollen data show an upward shift of the montane forests, the northward shift of the southern margin of the Sahara, and a major extension of tropical rain forest. The lake and pollen data confirm monsoon expansion in eastern Africa, but the climate model fails to simulate the wet conditions in western Africa.
Abstract:
We propose a new class of neurofuzzy construction algorithms with the aim of maximizing generalization capability specifically for imbalanced data classification problems, based on leave-one-out (LOO) cross validation. The algorithms proceed in two stages: first, an initial rule base is constructed by estimating a Gaussian mixture model with analysis-of-variance decomposition from the input data; second, joint weighted least-squares parameter estimation and rule selection are carried out using an orthogonal forward subspace selection (OFSS) procedure. We show how different LOO-based rule selection criteria can be incorporated with OFSS, and advocate either maximizing the leave-one-out area under the receiver operating characteristic (ROC) curve, or maximizing the leave-one-out F-measure if the data sets exhibit imbalanced class distributions. Extensive comparative simulations illustrate the effectiveness of the proposed algorithms.
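One ingredient of the advocated selection criterion can be sketched on its own: the area under the ROC curve computed from scores and binary labels via the rank (Mann-Whitney) formulation, which is what a leave-one-out AUC criterion would evaluate on held-out scores. The neurofuzzy model and the OFSS search themselves are not reproduced here.

```python
def auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as one half; labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # pairwise comparison form of the Mann-Whitney statistic
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because AUC compares positives against negatives pairwise rather than counting overall accuracy, it is insensitive to class imbalance, which is why the abstract favours it (or the F-measure) for imbalanced data sets.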