849 results for large sample distributions


Relevance:

30.00%

Publisher:

Abstract:

The occurrence of mid-latitude windstorms is associated with severe socio-economic impacts. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach combined with dynamical downscaling is introduced to derive storm-related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of the 100 top-ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as those of pure dynamical downscaling. This new tool can easily be applied to large ensembles of general circulation model simulations and can thus contribute to a better understanding of the regional impact of windstorms in decadal and climate change projections.
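The core of such a statistical downscaling step can be sketched as a per-grid-cell multiple linear regression mapping coarse predictors to a target gust speed. The predictors, coefficients and synthetic data below are invented for illustration; they are not the study's actual setup.

```python
import numpy as np

# Hypothetical sketch: train one multiple linear regression per high-resolution
# grid cell, mapping coarse reanalysis predictors (e.g. mean wind speed and a
# pressure-gradient proxy) to the gust speed simulated by a regional climate model.
rng = np.random.default_rng(0)
n_events = 100                                    # top-ranking windstorm events
X = rng.uniform(5.0, 30.0, size=(n_events, 2))    # coarse predictors per event
beta_true = np.array([1.4, 0.3])                  # made-up "true" relationship
gusts = X @ beta_true + 2.0 + rng.normal(0.0, 0.5, n_events)  # RCM gusts

# Ordinary least squares with an intercept column.
A = np.column_stack([X, np.ones(n_events)])
coef, *_ = np.linalg.lstsq(A, gusts, rcond=None)

def predict_gust(predictors):
    """Downscale a new event: a single dot product per grid cell."""
    return float(np.dot(np.append(predictors, 1.0), coef))
```

Once trained, applying the model to a large ensemble is just this cheap prediction step per cell and event, which is what makes the approach inexpensive compared to re-running the dynamical model.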


Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
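In serial form, the baseline that the dynamic group communication protocol improves on is the parallel k-means step in which every process computes partial centroid sums over its local points and a global reduction combines them (in MPI terms, an MPI_Allreduce). The sketch below simulates that step with toy data; all names are invented for illustration and this is not the paper's protocol itself.

```python
# Baseline parallel k-means step: local partial sums + global reduction.
def local_partial_sums(points, centroids):
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p in points:
        # Assign each local point to its nearest centroid.
        j = min(range(k), key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                            for d in range(dim)))
        counts[j] += 1
        for d in range(dim):
            sums[j][d] += p[d]
    return sums, counts

def global_reduce_step(partitions, centroids):
    # The global communication: combine every process's sums and counts.
    k, dim = len(centroids), len(centroids[0])
    tot_sums = [[0.0] * dim for _ in range(k)]
    tot_counts = [0] * k
    for part in partitions:
        s, c = local_partial_sums(part, centroids)
        for j in range(k):
            tot_counts[j] += c[j]
            for d in range(dim):
                tot_sums[j][d] += s[j][d]
    return [[tot_sums[j][d] / tot_counts[j] for d in range(dim)]
            if tot_counts[j] else list(centroids[j]) for j in range(k)]

parts = [[(0.0, 0.0), (0.2, 0.1)], [(5.0, 5.0), (5.2, 4.9)]]
new_c = global_reduce_step(parts, [(0.0, 0.0), (5.0, 5.0)])
```

With non-uniform (spatially compact) partitions, most of these per-centroid contributions come from only a few processes, which is what allows the all-to-all reduction to be replaced by communication within smaller, dynamically formed groups.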


Aim: This paper documents reconstructions of the vegetation patterns in Australia, Southeast Asia and the Pacific (SEAPAC region) in the mid-Holocene and at the last glacial maximum (LGM).

Methods: Vegetation patterns were reconstructed from pollen data using an objective biomization scheme based on plant functional types. The biomization scheme was first tested using 535 modern pollen samples from 377 sites, and then applied unchanged to fossil pollen samples dating to 6000 ± 500 or 18,000 ± 1000 ¹⁴C yr BP.

Results: 1. Tests using surface pollen sample sites showed that the biomization scheme is capable of reproducing the modern broad-scale patterns of vegetation distribution. The north–south gradient in temperature, reflected in transitions from cool evergreen needleleaf forest in the extreme south through temperate rain forest or wet sclerophyll forest (WSFW) into tropical forests, is well reconstructed. The transitions from xerophytic vegetation through sclerophyll woodlands and open forests to closed-canopy forests, which reflect the gradient in plant-available moisture from the continental interior towards the coast, are reconstructed with less geographical precision, but the broad-scale pattern nevertheless emerges. 2. Differences between the modern and mid-Holocene vegetation patterns in mainland Australia are comparatively small and reflect changes in moisture availability rather than temperature. In south-eastern Australia some sites show a shift towards more moisture-stressed vegetation in the mid-Holocene, with xerophytic woods/scrub and temperate sclerophyll woodland and shrubland at sites characterized today by WSFW or warm-temperate rain forest (WTRF). However, sites in the Snowy Mountains, on the Southern Tablelands and east of the Great Dividing Range had more moisture-demanding vegetation in the mid-Holocene than today. South-western Australia was slightly drier than today, and the single site in north-western Australia also shows conditions drier than today in the mid-Holocene. Changes in the tropics are also comparatively small, but the presence of WTRF and tropical deciduous broadleaf forest and woodland in the mid-Holocene, at sites occupied today by cool-temperate rain forest, indicates warmer conditions. 3. Expansion of xerophytic vegetation in the south and of tropical deciduous broadleaf forest and woodland in the north indicates drier conditions across mainland Australia at the LGM. None of these changes is informative about the degree of cooling. However, the evidence from the tropics, showing lowering of the treeline and forest belts, indicates that conditions were 1–9 °C (depending on elevation) colder. The encroachment of tropical deciduous broadleaf forest and woodland into lowland evergreen broadleaf forest implies greater aridity.

Main conclusions: This study provides the first continental-scale reconstruction of mid-Holocene and LGM vegetation patterns from Australia, Southeast Asia and the Pacific (SEAPAC region) using an objective biomization scheme. These data will provide a benchmark for the evaluation of palaeoclimate simulations within the framework of the Palaeoclimate Modelling Intercomparison Project.
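The general shape of a biomization scheme can be sketched as follows: pollen taxa map to plant functional types (PFTs), biomes are defined as sets of PFTs, and a sample is assigned to the biome with the highest affinity score. The taxa, PFT mappings, biome definitions and square-root scoring here are a simplified toy in the spirit of standard biomization, not the scheme actually used in the study.

```python
# Toy biomization: assign a pollen sample to the best-scoring biome.
TAXON_TO_PFTS = {
    "Nothofagus": {"cool_evergreen_tree"},
    "Eucalyptus": {"sclerophyll_tree"},
    "Poaceae": {"grass"},
}
BIOME_PFTS = {
    "cool_forest": {"cool_evergreen_tree"},
    "sclerophyll_woodland": {"sclerophyll_tree", "grass"},
    "grassland": {"grass"},
}

def biomize(pollen_percent):
    # Affinity score: sum of sqrt(percentage) over taxa whose PFTs occur in
    # the biome's definition (the square root damps dominant taxa).
    scores = {}
    for biome, pfts in BIOME_PFTS.items():
        scores[biome] = sum(pct ** 0.5
                            for taxon, pct in pollen_percent.items()
                            if TAXON_TO_PFTS.get(taxon, set()) & pfts)
    return max(scores, key=scores.get)

sample = {"Eucalyptus": 40.0, "Poaceae": 30.0, "Nothofagus": 5.0}
biome = biomize(sample)
```

Because the assignment depends only on the taxon-to-PFT mapping and the biome definitions, the same scheme can be applied unchanged to modern and fossil samples, which is what makes the modern-sample test meaningful.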


Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of k-means requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation in which the requirement of global communication is relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution, which can either be found in real-world distributed applications or be induced by means of multidimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
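A minimal sketch of how a multidimensional binary search tree (k-d tree) can induce the non-uniform distribution mentioned above: points are recursively split at the median of alternating coordinates, so each leaf holds a spatially compact block of data. The function and toy data are illustrative, not the paper's implementation.

```python
# Induce a spatially compact partition with median splits on alternating axes.
def kd_partition(points, depth=0, leaf_size=2):
    if len(points) <= leaf_size:
        return [points]
    axis = depth % len(points[0])            # cycle through the coordinates
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                      # median split
    return (kd_partition(pts[:mid], depth + 1, leaf_size) +
            kd_partition(pts[mid:], depth + 1, leaf_size))

data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
        (5.0, 5.0), (5.1, 4.9), (5.2, 5.1)]
leaves = kd_partition(data)
```

Because each leaf is spatially compact, most centroids are far from any given leaf's points, so a process owning one leaf only needs to exchange updates with the few processes whose data can actually affect the same centroids; this locality is what allows the global reduction to be relaxed without changing the exact result.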


Though many global aerosol models prognose surface deposition, only a few have been used to directly simulate the radiative effect of black carbon (BC) deposited in snow and sea ice. Here, we apply aerosol deposition fields from 25 models contributing to two phases of the Aerosol Comparisons between Observations and Models (AeroCom) project to simulate and evaluate within-snow BC concentrations and the associated radiative effect in the Arctic. We accomplish this by driving the offline land and sea ice components of the Community Earth System Model with the different deposition fields and with meteorological conditions from 2004 to 2009, during which an extensive field campaign of BC measurements in Arctic snow took place. We find that models generally underestimate BC concentrations in snow in northern Russia and Norway, while overestimating BC amounts elsewhere in the Arctic. Although simulated BC distributions in snow are poorly correlated with measurements, mean values are reasonable. The multi-model mean (range) bias in BC concentrations, sampled over the same grid cells, snow depths, and months as the measurements, is −4.4 (−13.2 to +10.7) ng g−1 for the earlier phase of AeroCom models (phase I) and +4.1 (−13.0 to +21.4) ng g−1 for the more recent phase (phase II), compared to the observational mean of 19.2 ng g−1. Factors determining model BC concentrations in Arctic snow include Arctic BC emissions, transport of extra-Arctic aerosols, precipitation, deposition efficiency of aerosols within the Arctic, and meltwater removal of particles in snow. Sensitivity studies show that the model–measurement evaluation is only weakly affected by the meltwater scavenging efficiency, because most measurements were conducted in non-melting snow. The Arctic (60–90° N) atmospheric residence time for BC in phase II models ranges from 3.7 to 23.2 days, implying large inter-model variation in local BC deposition efficiency.
Combined with the fact that most Arctic BC deposition originates from extra-Arctic emissions, these results suggest that aerosol removal processes are a leading source of variation in model performance. The multi-model mean (full range) of Arctic radiative effect from BC in snow is 0.15 (0.07–0.25) W m−2 and 0.18 (0.06–0.28) W m−2 in phase I and phase II models, respectively. After correcting for model biases relative to observed BC concentrations in different regions of the Arctic, we obtain a multi-model mean Arctic radiative effect of 0.17 W m−2 for the combined AeroCom ensembles. Finally, there is a high correlation between modeled BC concentrations sampled over the observational sites and the Arctic as a whole, indicating that the field campaign provided a reasonable sample of the Arctic.
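The "multi-model mean (range) bias" statistic quoted above can be made concrete with a short sketch: each model's bias is its mean simulated concentration minus the observed mean, and the ensemble is summarised by the mean and full range of those biases. The model values below are invented; only the observational mean of 19.2 ng/g is taken from the text.

```python
# Summarise an ensemble of model means against one observational mean.
def ensemble_bias_stats(model_means, obs_mean):
    biases = [m - obs_mean for m in model_means]
    return sum(biases) / len(biases), min(biases), max(biases)

obs_mean = 19.2                          # ng/g, field-campaign mean
model_means = [6.0, 14.5, 23.9, 40.6]    # hypothetical per-model means, ng/g
mean_bias, lo, hi = ensemble_bias_stats(model_means, obs_mean)
```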


Observations by the EISCAT experiments “POLAR” and Common Programme CP-3 reveal non-Maxwellian ion velocity distributions in the auroral F-region ionosphere. Analysis of data from three periods is presented. During the first period, convection velocities are large (≈2 km s⁻¹) and constant over part of a CP-3 latitude scan; the second period is one of POLAR data containing a short-lived (<1 min) burst of rapid (>1.5 km s⁻¹) flow. We concentrate on these two periods because they allow the study of a great many features of the ion-neutral interactions which drive the plasma away from thermal equilibrium, and they provide the best available experimental test for models of the three-dimensional ion velocity distribution function. The third period is included to illustrate that non-thermal plasma frequently exists in the auroral ionosphere: the data, also from the POLAR experiment, cover a three-hour period of typical auroral zone flow, and analysis reveals that the ion distribution varies from Maxwellian to the threshold of a toroidal form.


Recent observations from the EISCAT incoherent scatter radar have revealed bursts of poleward ion flow in the dayside auroral ionosphere which are consistent with the ionospheric signature of flux transfer events at the magnetopause. These bursts frequently contain ion drifts which exceed the neutral thermal speed and, because the neutral thermospheric wind is incapable of responding sufficiently rapidly, toroidal, non-Maxwellian ion velocity distributions are expected. The EISCAT observations are made with high time resolution (15 seconds) and at a large angle to the geomagnetic field (73.5°), allowing the non-Maxwellian nature of the distribution to be observed remotely for the first time. The observed features are also strongly suggestive of a toroidal distribution: characteristic spectral shape, increased scattered power (both consistent with reduced Landau damping and enhanced electric field fluctuations) and excessively high line-of-sight ion temperatures deduced if a Maxwellian distribution is assumed. These remote sensing observations allow the evolution of the distributions to be observed. They are found to be non-Maxwellian whenever the ion drift exceeds the neutral thermal speed, indicating that such distributions can exist over the time scale of the flow burst events (several minutes).


An efficient and robust method to measure vitamin D (25-hydroxyvitamin D3 (25(OH)D3) and 25-hydroxyvitamin D2) in dried blood spots (DBS) has been developed and applied in the pan-European, multi-centre, internet-based, personalised nutrition intervention study Food4Me. The method includes calibration with blood containing endogenous 25(OH)D3, spotted as DBS and corrected for haematocrit content. The methodology was validated following international standards. The performance characteristics did not reach those of the current gold standard, liquid chromatography–MS/MS in plasma, for all parameters, but were found to be very suitable for status-level determination under field conditions. DBS sample quality was very high, and 3778 measurements of 25(OH)D3 were obtained from 1465 participants. The study centre and the season within the study centre were very good predictors of 25(OH)D3 levels (P < 0.001 in each case). Seasonal effects were modelled by fitting a sine function with a minimum 25(OH)D3 level on 20 January and a maximum on 21 July. The seasonal amplitude varied from centre to centre. The largest difference between winter and summer levels was found in Germany and the smallest in Poland. The model was cross-validated to determine the consistency of the predictions and the performance of the DBS method. Pearson's correlation between the measured and predicted values was r = 0.65, and the SD of their differences was 21.2 nmol/l. This includes the analytical variation and the within-subject biological variation. Overall, DBS obtained by unsupervised sampling of the participants at home was a viable methodology for obtaining vitamin D status information in a large nutritional study.
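The fixed-phase seasonal model described above (minimum on 20 January, maximum on 21 July) becomes linear in its two free parameters once the phase is pinned, so it can be fitted by ordinary least squares. The fitting approach, the day-of-year for the maximum (~202), and all numeric values below are illustrative assumptions, not the study's calibrated model.

```python
import math

T_MAX = 202.0  # approximate day of year of 21 July (assumed)

def fit_seasonal(days, levels):
    # Model: level(t) = mean + amplitude * cos(2*pi*(t - T_MAX)/365.25),
    # which peaks at T_MAX and bottoms out ~half a year earlier (late January).
    # With the phase fixed, this is OLS on the regressor c_t.
    c = [math.cos(2 * math.pi * (t - T_MAX) / 365.25) for t in days]
    n = len(days)
    c_bar = sum(c) / n
    y_bar = sum(levels) / n
    num = sum((ci - c_bar) * (yi - y_bar) for ci, yi in zip(c, levels))
    den = sum((ci - c_bar) ** 2 for ci in c)
    amplitude = num / den
    mean = y_bar - amplitude * c_bar
    return mean, amplitude

# Synthetic check: noiseless data generated from the model is recovered exactly.
days = list(range(0, 365, 10))
true_mean, true_amp = 50.0, 15.0   # nmol/l, made-up values
levels = [true_mean + true_amp * math.cos(2 * math.pi * (t - T_MAX) / 365.25)
          for t in days]
mean, amp = fit_seasonal(days, levels)
```

Fitting one such pair (mean, amplitude) per study centre reproduces the reported pattern of centre-dependent seasonal amplitudes.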


Human observers exhibit large systematic distance-dependent biases when estimating the three-dimensional (3D) shape of objects defined by binocular image disparities. This has led some to question the utility of disparity as a cue to 3D shape, and whether accurate estimation of 3D shape is possible at all. Others have argued that accurate perception is possible, but only with large continuous perspective transformations of an object. Using a stimulus that is known to elicit large distance-dependent perceptual bias (random dot stereograms of elliptical cylinders), we show that, contrary to these findings, the simple adoption of a more naturalistic viewing angle completely eliminates the bias. Using behavioural psychophysics coupled with a novel surface-based reverse correlation methodology, we show that it is binocular edge and contour information that allows for accurate and precise perception, and that observers actively exploit and sample this information when it is available.


We present a catalogue of galaxy photometric redshifts and k-corrections for the Sloan Digital Sky Survey Data Release 7 (SDSS-DR7), available on the World Wide Web. The photometric redshifts were estimated with an artificial neural network using the five ugriz bands, concentration indices and Petrosian radii in the g and r bands. We explored our redshift estimates with different training sets, concluding that the best choice for improving redshift accuracy comprises the main galaxy sample (MGS), the luminous red galaxies and the galaxies of active galactic nuclei, covering the redshift range 0 < z < 0.3. For the MGS, the photometric redshift estimates agree with the spectroscopic values within rms = 0.0227. The distribution of photometric redshifts derived in the range 0 < z(phot) < 0.6 agrees well with the model predictions. k-corrections were derived by calibrating the k-correct_v4.2 code results for the MGS with the reference-frame (z = 0.1) (g − r) colours. We adopt a linear dependence of k-corrections on redshift and (g − r) colour that provides suitable distributions of luminosity and colours for galaxies up to redshift z(phot) = 0.6, comparable to results in the literature. Our k-correction estimation procedure is thus a powerful, computationally cheap algorithm capable of reproducing suitable results that can be used for testing galaxy properties at intermediate redshifts using the large SDSS database.
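Two of the quantitative pieces above can be sketched directly: the rms metric used to quote photo-z accuracy, and a k-correction that is linear in redshift with a colour-dependent slope, anchored so that it vanishes at the reference frame z = 0.1. The coefficients a and b are placeholders, not the calibrated values from the catalogue.

```python
# Photo-z accuracy metric: rms scatter between photometric and spectroscopic z.
def redshift_rms(z_phot, z_spec):
    n = len(z_phot)
    return (sum((zp - zs) ** 2 for zp, zs in zip(z_phot, z_spec)) / n) ** 0.5

# Toy k-correction, linear in (z - z_ref) with a slope linear in (g - r):
# by construction it is zero at the z = 0.1 reference frame.
def k_correction(z, g_minus_r, a=1.5, b=-0.3, z_ref=0.1):
    return (a * g_minus_r + b) * (z - z_ref)

sample_phot = [0.10, 0.21, 0.305]   # made-up example values
sample_spec = [0.12, 0.20, 0.30]
rms = redshift_rms(sample_phot, sample_spec)
```

The linear form is what keeps the procedure cheap: evaluating it for the full survey is a single multiply-add per galaxy rather than a spectral-template fit.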


We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than M_r = −21 + 5 log h_100 to identify superclusters, taking into account selection and boundary effects. In order to evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1) and the second constrains the maximum supercluster size to ~120 h⁻¹ Mpc (D2). We performed a morphological analysis, using Minkowski functionals, based on a shape parameter which increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both the observed and the mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity spaces, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes present different luminosity and size distributions.


The Large Magellanic Cloud (LMC) has a rich star cluster system spanning a wide range of ages and masses. One striking feature of the LMC cluster system is the existence of an age gap between 3 and 10 Gyr, a feature not clearly seen among field stars. Three LMC fields containing relatively poor and sparse clusters, whose integrated colours are consistent with those of intermediate-age simple stellar populations, have been imaged in BVI with the SOAR Optical Imager (SOI) at the Southern Astrophysical Research (SOAR) telescope. A total of six clusters, five of them with estimated initial masses M < 10⁴ M☉, were studied in these fields. Photometry was performed and colour–magnitude diagrams (CMDs) were built using standard point spread function fitting methods. The faintest stars measured reach V ≈ 23. The CMDs were cleaned of field contamination by making use of the three-dimensional colour and magnitude space available in order to select stars in excess relative to the field; a statistical CMD comparison method was developed for this purpose. The subtraction method has proven successful, yielding cleaned CMDs consistent with a simple stellar population. The intermediate-age candidates were found to be the oldest in our sample, with ages between 1 and 2 Gyr. The remaining clusters found in the SOAR/SOI fields have ages ranging from 100 to 200 Myr. Our analysis has conclusively shown that none of the relatively low-mass clusters studied by us belongs to the LMC age gap.


We describe a new spectroscopic technique for measuring radial metallicity gradients out to large galactocentric radii. We use the DEep Imaging Multi-Object Spectrograph (DEIMOS) on the Keck telescope and the galaxy spectrum extraction technique of Proctor et al. We also make use of the metallicity-sensitive near-infrared Calcium II triplet (CaT) features together with single stellar population models to obtain metallicities. Our technique is applied, as a pilot study, to a sample of three relatively nearby (<30 Mpc) intermediate-mass to massive early-type galaxies. Results are compared with previous literature values for the inner regions and generally show good agreement. We also include a comparison with profiles from dissipational disc–disc major merger simulations. Based on our new extended metallicity gradients, combined with other observational evidence and theoretical predictions, we discuss possible formation scenarios for the galaxies in our sample. The limitations of the new technique are also discussed.


Activities involving fauna monitoring are usually limited by the lack of resources; therefore, the choice of a proper and efficient methodology is fundamental to maximize the cost–benefit ratio. Both direct and indirect methods can be used to survey mammals, but the latter are preferred because of the difficulty of sighting and/or capturing the individuals, besides being cheaper. We compared the performance of two methods to survey medium- and large-sized mammals, track plot recording and camera trapping, and assessed their costs. At Jataí Ecological Station (21°31'15"S, 47°34'42"W, Brazil) we installed ten camera traps along a dirt road, each directly in front of a track plot, and monitored them for 10 days. We cleaned the plots, adjusted the cameras, and recorded the registered species daily. Records taken by the two methods showed that they sample the local richness in different ways (Wilcoxon, T = 231; p < 0.01). The track plot method performed better at registering individuals, whereas camera trapping provided records that permitted more accurate species identification. The type of infra-red sensor camera used showed a strong bias towards individual body mass (R² = 0.70; p = 0.017), and the variable expenses of this method in a 10-day survey were estimated to be about 2.04 times higher than those of the track plot method; in the long run, however, camera trapping becomes cheaper than track plot recording. In conclusion, track plot recording is good enough for quick surveys under a limited budget, while camera trapping is best for precise species identification and the investigation of species details, performing better for large animals. When used together, the two methods can be complementary.
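The cost trade-off described above (camera trapping more expensive over a short survey but cheaper in the long run) is a simple fixed-versus-running-cost crossover. The sketch below finds the break-even survey length; all cost figures are invented for illustration and are not the study's estimates.

```python
import math

def total_cost(fixed, per_day, days):
    """Total survey cost: one-off setup cost plus daily running cost."""
    return fixed + per_day * days

def break_even_days(fixed_a, per_day_a, fixed_b, per_day_b):
    """First whole day at which method A is no more expensive than method B."""
    assert per_day_a < per_day_b, "A must have the lower running cost"
    days = (fixed_a - fixed_b) / (per_day_b - per_day_a)
    return math.ceil(days)

# Made-up currency units: cameras cost a lot up front but little per day,
# track plots are cheap to set up but need daily labour.
camera_fixed, camera_daily = 3000.0, 10.0
plots_fixed, plots_daily = 200.0, 60.0
crossover = break_even_days(camera_fixed, camera_daily,
                            plots_fixed, plots_daily)
```

For surveys shorter than the crossover, the track plot method wins on cost; beyond it, camera trapping does, matching the qualitative conclusion of the study.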


The reconstruction of Extensive Air Showers (EAS) observed by particle detectors at the ground is based on the characteristics of observables like the lateral particle density and the arrival times. The lateral densities, inferred for the different EAS components from detector data, are usually parameterised by applying various lateral distribution functions (LDFs). The LDFs are used in turn for evaluating quantities like the total number of particles or the density at particular radial distances. Typical expressions for LDFs assume azimuthal symmetry of the density around the shower axis. Deviations of the lateral particle density from this assumption, arising for various reasons, are smoothed out in the case of compact arrays like KASCADE, but not in the case of arrays like Grande, which sample only a smaller part of the azimuthal variation. KASCADE-Grande, an extension of the former KASCADE experiment, is a multi-component EAS experiment located at the Karlsruhe Institute of Technology (Campus North), Germany. The lateral distributions of charged particles are deduced from the basic information provided by the Grande scintillators (the energy deposits), first in the observation plane and then in the intrinsic shower plane. Azimuthal dependences should be taken into account in all steps. As the energy deposit in the scintillators depends on the angles of incidence of the particles, azimuthal dependences are already involved in the first step, the conversion from energy deposits to charged particle density. This is done using the Lateral Energy Correction Function (LECF), which evaluates the mean energy deposited by a charged particle, taking into account the contribution of other particles (e.g. photons) to the energy deposit.
By using a very fast procedure for the evaluation of the energy deposited by various particles, we prepared realistic LECFs depending on the angle of incidence of the shower and on the radial and azimuthal coordinates of the detector location. Mapping the lateral density from the observation plane onto the intrinsic shower plane does not remove the azimuthal dependences arising from geometric and attenuation effects, in particular for inclined showers. Realistic procedures for applying correction factors are developed. Specific examples are given of the bias introduced by neglecting the azimuthal asymmetries in the conversion from the energy deposit in the Grande detectors to the lateral density of charged particles in the intrinsic shower plane.
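As a concrete example of an azimuthally symmetric LDF of the kind discussed above, the sketch below evaluates an NKG-type function, a common parameterisation of the charged-particle density in EAS arrays. The parameter values (shower size, age, Molière radius) are placeholders, not those used by KASCADE-Grande.

```python
import math

def nkg_density(r, n_total, age_s, r_moliere=89.0):
    """NKG-type charged-particle density [m^-2] at core distance r [m].

    rho(r) = N * C(s) * (r/r_M)^(s-2) * (1 + r/r_M)^(s-4.5),
    with C(s) the usual normalisation built from Gamma functions.
    """
    c = (math.gamma(4.5 - age_s) /
         (2 * math.pi * r_moliere ** 2 *
          math.gamma(age_s) * math.gamma(4.5 - 2 * age_s)))
    x = r / r_moliere
    return n_total * c * x ** (age_s - 2.0) * (1.0 + x) ** (age_s - 4.5)

# The density falls steeply with core distance, as expected for an LDF.
rho_50 = nkg_density(50.0, n_total=1e6, age_s=1.4)
rho_200 = nkg_density(200.0, n_total=1e6, age_s=1.4)
```

An azimuthal asymmetry of the kind the paper studies would appear as a modulation of this purely radial profile; fitting the symmetric form to detectors that sample only part of the azimuth is exactly where the described bias enters.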