899 results for range analysis
Abstract:
This paper considers the contribution of pollen analysis to conservation strategies aimed at restoring planted ancient woodland. Pollen and charcoal data are presented from organic deposits located adjacent to the Wentwood, a large planted ancient woodland in southeast Wales. Knowledge of the ecosystems preceding conifer planting can assist in restoring ancient woodlands by placing fragmented surviving ancient woodland habitats in a broader ecological, historical and cultural context. These habitats derive largely from secondary woodland that regenerated in the 3rd–5th centuries A.D. following large-scale clearance of Quercus-Corylus woodland during the Romano-British period. Woodland regeneration favoured Fraxinus and Betula. Wood pasture and common land dominated the Wentwood during the medieval period until the enclosures of the 17th century. Surviving ancient woodland habitats contain an important Fagus component that probably reflects an earlier phase of planting preceding conifer planting in the 1880s. It is recommended that restoration measures should not aim to recreate static landscapes or woodland that existed under natural conditions. Very few habitats within the Wentwood can be considered wholly natural because of the long history of human impact. In these circumstances, restoration should focus on restoring those elements of the cultural landscape that are of most benefit to a range of flora and fauna, whilst taking into account factors that present significant issues for future conservation management, such as the adverse effects from projected climate change.
Abstract:
Two different ways of performing low-energy electron diffraction (LEED) structure determinations for the p(2 x 2) structure of oxygen on Ni {111} are compared: a conventional LEED-IV structure analysis using integer- and fractional-order IV-curves collected at normal incidence, and an analysis using only integer-order IV-curves collected at three different angles of incidence. The latter approach discriminates between different adsorption sites as clearly as the first, and the best-fit structures of the two analyses lie within each other's error bars (all less than 0.1 angstrom). The conventional analysis is more sensitive to the adsorbate coordinates and lateral parameters of the substrate atoms, whereas the integer-order-based analysis is more sensitive to the vertical coordinates of substrate atoms. Adsorbate-related contributions to the intensities of integer-order diffraction spots are independent of the state of long-range order in the adsorbate layer. These results show, therefore, that for lattice-gas disordered adsorbate layers, for which only integer-order spots are observed, accuracy and reliability similar to those for ordered adsorbate layers can be achieved, provided the data set is large enough.
Abstract:
We present an extensive thermodynamic analysis of a hysteresis experiment performed on a simplified yet Earth-like climate model. We slowly vary the solar constant by 20% around the present value and detect that, for a large range of values of the solar constant, the realization of snowball or of regular climate conditions depends on the history of the system. Using recent results on global climate thermodynamics, we show that the two regimes feature radically different properties. The efficiency of the climate machine monotonically increases with decreasing solar constant in present climate conditions, whereas the opposite takes place in snowball conditions. By contrast, entropy production increases monotonically with the solar constant in both branches of climate conditions, and its value is about four times larger in the warm branch than in the corresponding cold state. Finally, the degree of irreversibility of the system, measured as the fraction of excess entropy production due to irreversible heat transport processes, is much higher in the warm climate conditions, with an explosive growth in the upper range of the considered values of the solar constant. Whereas in the cold climate regime a dominating role is played by changes in the meridional albedo contrast, in the warm climate regime changes in the intensity of latent heat fluxes are crucial for determining the observed properties. This substantiates the importance of correctly representing the variations of the hydrological cycle in a changing climate. An interpretation of the climate transitions at the tipping points based upon macro-scale thermodynamic properties is also proposed. Our results support the adoption of a new generation of diagnostic tools based on the second law of thermodynamics for auditing climate models, and outline a set of parametrizations to be used in conceptual and intermediate-complexity models or for the reconstruction of past climate conditions.
Copyright © 2010 Royal Meteorological Society
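The "efficiency of the climate machine" invoked above is a Carnot-like, second-law diagnostic. As a hedged sketch of this class of diagnostic (not the paper's exact formulation), it can be written as a ratio built from the effective temperatures of the regions of net heating and net cooling; the function name and inputs below are illustrative assumptions.

```python
def climate_efficiency(theta_plus, theta_minus):
    """Carnot-like efficiency of the climate machine:
    eta = (T+ - T-) / T+, where T+ and T- are effective temperatures (K)
    of the regions of net heating and net cooling.
    Illustrative sketch only, not the paper's exact diagnostic."""
    return (theta_plus - theta_minus) / theta_plus
```

For example, effective temperatures of 260 K and 250 K give an efficiency of about 3.8%.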
Abstract:
This study assesses the current state of adult skeletal age-at-death estimation in biological anthropology through analysis of data published in recent research articles from three major anthropological and archaeological journals (2004–2009). The most commonly used adult ageing methods, age of ‘adulthood’, age ranges and the maximum age reported for ‘mature’ adults were compared. The results showed a wide range of variability in the age at which individuals were determined to be adult (from 14 to 25 years), uneven age ranges, a lack of standardisation in the use of descriptive age categories and the inappropriate application of some ageing methods for the sample being examined. Such discrepancies make comparisons between skeletal samples difficult, while the inappropriate use of some techniques makes the resultant age estimations unreliable. At a time when national and even global comparisons of past health are becoming prominent, standardisation in the terminology and age categories used to define adults within each sample is fundamental. It is hoped that this research will prompt discussions in the osteological community (both nationally and internationally) about what defines an ‘adult’, how to standardise the age ranges that we use and how individuals should be assigned to each age category. Skeletal markers have been proposed to help physically identify ‘adult’ individuals.
Abstract:
The M protein of coronavirus plays a central role in virus assembly, turning cellular membranes into workshops where virus and host factors come together to make new virus particles. We investigated how M structure and organization are related to virus shape and size using cryo-electron microscopy, tomography and statistical analysis. We present evidence that suggests M can adopt two conformations and that membrane curvature is regulated by one M conformer. Elongated M protein is associated with rigidity, clusters of spikes and a relatively narrow range of membrane curvature. In contrast, compact M protein is associated with flexibility and low spike density. Analysis of several types of virus-like particles and virions revealed that S protein, N protein and genomic RNA each help to regulate virion size and variation, presumably through interactions with M. These findings provide insight into how M protein functions to promote virus assembly.
Abstract:
A rapid thiolytic degradation and cleanup procedure was developed for analyzing tannins directly in chlorophyll-containing sainfoin (Onobrychis viciifolia) plants. The technique proved suitable for complex tannin mixtures containing catechin, epicatechin, gallocatechin, and epigallocatechin flavan-3-ol units. The reaction time was standardized at 60 min to minimize the loss of structural information as a result of epimerization and degradation of terminal flavan-3-ol units. The results were evaluated by separate analysis of extractable and unextractable tannins, which accounted for 63.6–113.7% of the in situ plant tannins. It is of note that 70% aqueous acetone extracted tannins with a lower mean degree of polymerization (mDP) than was found for tannins analyzed in situ; extractable tannins had mDP values that were lower by between 4 and 29 units. The method was validated by comparing results from individual and mixed sample sets. The tannin composition of different sainfoin accessions covered a range of mDP values from 16 to 83, procyanidin/prodelphinidin (PC/PD) ratios from 19.2/80.8 to 45.6/54.4, and cis/trans ratios from 74.1/25.9 to 88.0/12.0. This is the first high-throughput screening method that is suitable for analyzing condensed tannin contents and structural composition directly in green plant tissue.
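The mDP values quoted above follow from the usual thiolysis bookkeeping: extension units are released as thioether adducts while terminal units are released as free flavan-3-ols, so mDP is the ratio of total units to terminal units. A minimal sketch (function name and inputs are illustrative, not the paper's code):

```python
def mean_degree_of_polymerization(extension_units, terminal_units):
    """mDP from thiolytic degradation: total flavan-3-ol units
    (extension + terminal) divided by terminal units.
    Inputs are molar amounts from chromatographic quantification
    (illustrative sketch of the standard calculation)."""
    return (extension_units + terminal_units) / terminal_units
```

For instance, 15 mol of extension units per 1 mol of terminal units gives mDP = 16, at the lower end of the 16–83 range reported for the sainfoin accessions.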
Abstract:
The mean state, variability and extreme variability of the stratospheric polar vortices, with an emphasis on the Northern Hemisphere vortex, are examined using 2-dimensional moment analysis and Extreme Value Theory (EVT). The use of moments as an analysis tool gives rise to information about the vortex area, centroid latitude, aspect ratio and kurtosis. The application of EVT to these moment-derived quantities allows the extreme variability of the vortex to be assessed. The data used for this study are ECMWF ERA-40 potential vorticity fields on interpolated isentropic surfaces that range from 450 K to 1450 K. Analyses show that the most extreme vortex variability occurs most commonly in late January and early February, consistent with when most planetary wave driving from the troposphere is observed. Composites around sudden stratospheric warming (SSW) events reveal that the moment diagnostics evolve in statistically different ways between vortex splitting events and vortex displacement events, in contrast to the traditional diagnostics. Histograms of the vortex diagnostics on the 850 K (∼10 hPa) surface over the 1958–2001 period are fitted with parametric distributions, and show that SSW events comprise the majority of data in the tails of the distributions. The distribution of each diagnostic is computed on various surfaces throughout the depth of the stratosphere, and shows that in general the vortex becomes more circular with higher filamentation at the upper levels. The Northern Hemisphere (NH) and Southern Hemisphere (SH) vortices are also compared through the analysis of their respective vortex diagnostics, which confirms that the SH vortex is less variable and lacks extreme events compared to the NH vortex. Finally, extreme value theory is used to statistically model the vortex diagnostics and make inferences about the underlying dynamics of the polar vortices.
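The area, centroid and aspect-ratio diagnostics named above come from 2-dimensional moments of the thresholded PV field. The sketch below shows how second-order moments yield an equivalent-ellipse aspect ratio; the simple lat-lon thresholding and function names are assumptions for illustration, and the paper's diagnostics additionally use equivalent-latitude coordinates and higher moments such as kurtosis.

```python
import numpy as np

def vortex_moments(pv, lat, lon, threshold):
    """2-D moment diagnostics of the region where pv >= threshold.
    Returns the vortex area (grid-cell count), centroid (lat, lon), and
    the aspect ratio of the equivalent ellipse from second-order moments.
    Illustrative sketch on a regular lat-lon grid (no area weighting)."""
    LON, LAT = np.meshgrid(lon, lat)
    w = (pv >= threshold).astype(float)
    m00 = w.sum()
    cx = (w * LON).sum() / m00                  # centroid longitude
    cy = (w * LAT).sum() / m00                  # centroid latitude
    mu20 = (w * (LON - cx) ** 2).sum() / m00    # central second moments
    mu02 = (w * (LAT - cy) ** 2).sum() / m00
    mu11 = (w * (LON - cx) * (LAT - cy)).sum() / m00
    # Eigenvalues of the second-moment matrix give the equivalent-ellipse axes.
    tr, det = mu20 + mu02, mu20 * mu02 - mu11 ** 2
    disc = np.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_major, lam_minor = tr / 2.0 + disc, tr / 2.0 - disc
    aspect = np.sqrt(lam_major / lam_minor)
    return m00, (cy, cx), aspect
```

Applied to an elliptical test region with a 2:1 axis ratio, the diagnostic recovers an aspect ratio of 2; a splitting vortex would show the aspect ratio growing rapidly before the split.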
Abstract:
Using the recently-developed mean–variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, an analysis is presented of the spatiotemporal dynamics of their perturbations, showing how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. In particular, a divide is seen between ensembles based on singular vectors or empirical orthogonal functions, and those based on bred vector, Ensemble Transform with Rescaling or Ensemble Kalman Filter techniques. Consideration is also given to the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. Finally, the use of the MVL technique to assist in selecting models for inclusion in a multi-model ensemble is discussed, and an experiment suggested to test its potential in this context.
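As a rough sketch of the diagnostic named above, one MVL-diagram point at a given lead time can be taken as the spatial mean and variance of the logarithm of the perturbation amplitude. The exact field and normalisation used by the authors are not specified here, so the function below is an assumed illustration.

```python
import numpy as np

def mvl_point(perturbation, eps=1e-12):
    """One point of a mean-variance of logarithms (MVL) diagram:
    the spatial mean and variance of log|perturbation| at a single
    forecast lead time. eps guards against log(0).
    Illustrative sketch of the diagnostic, not the authors' code."""
    logs = np.log(np.abs(perturbation) + eps)
    return logs.mean(), logs.var()
```

Plotting the variance against the mean as the forecast evolves traces out the characteristic MVL curve whose shape distinguishes the perturbation techniques compared in the paper.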
Abstract:
The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the path-integrated attenuation caused by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain upon drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the “Smyth and Illingworth constraint,” which uses the constraint that the value of the differential reflectivity ZDR on the far side of the storm should be low to retrieve the γDP coefficient. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimal, maximal, and mean values being, respectively, equal to 0.01, 0.11, and 0.025 dB °−1. Coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP), and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells.
Unusually large values of γDP, above 0.06 dB °−1, often referred to as “hot spots,” are reported for 15%—a nonnegligible figure—of the rays presenting a significant total differential phase shift (ΔϕDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, this suggests that KDP may not be uniquely related to rainfall rate but can result from the presence of wet ice. This hypothesis is supported by the analysis of the vertical profiles of horizontal reflectivity and the values of conventional probability of hail indexes.
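The linear correction assumed at the start of this abstract can be sketched in a few lines: if AH = γH·KDP and ADP = γDP·KDP, the path-integrated (differential) attenuation at each range gate is simply γ·ΦDP, which is added back to the measured fields. The default γDP of 0.025 dB °−1 matches the mean value reported above; the γH default is a hypothetical illustration, and real processing would retrieve both per storm via the Smyth and Illingworth constraint.

```python
import numpy as np

def correct_attenuation(zh, zdr, phidp, gamma_h=0.08, gamma_dp=0.025):
    """Linear differential-phase attenuation correction (sketch).
    Assuming A_H = gamma_h * K_DP and A_DP = gamma_dp * K_DP, the
    path-integrated attenuation at each gate equals gamma * PhiDP,
    so it is added back to the measured fields.
    zh in dBZ, zdr in dB, phidp in degrees, gammas in dB per degree;
    gamma_h default is illustrative, gamma_dp is the paper's mean value."""
    zh_corr = zh + gamma_h * phidp
    zdr_corr = zdr + gamma_dp * phidp
    return zh_corr, zdr_corr
```

For a gate with ΦDP = 100°, this restores 8 dB to ZH and 2.5 dB to ZDR under the default coefficients; a "hot spot" ray with γDP above 0.06 dB °−1 would need more than double the ZDR correction.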
Abstract:
We provide a unified framework for a range of linear transforms that can be used for the analysis of terahertz spectroscopic data, with particular emphasis on their application to the measurement of leaf water content. The use of linear transforms for filtering, regression, and classification is discussed. For illustration, a classification problem involving leaves at three stages of drought and a prediction problem involving simulated spectra are presented. Issues resulting from scaling the data set are discussed. Using Lagrange multipliers, we arrive at the transform that yields the maximum separation between the spectra and show that this optimal transform is equivalent to computing the Euclidean distance between the samples. The optimal linear transform is compared with the average for all the spectra as well as with the Karhunen–Loève transform to discriminate a wet leaf from a dry leaf. We show that taking several principal components into account is equivalent to defining new axes in which data are to be analyzed. The procedure shows that the coefficients of the Karhunen–Loève transform are well suited to the process of classification of spectra. This is in line with expectations, as these coefficients are built from the statistical properties of the data set analyzed.
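The equivalence noted above (the optimal separating transform reducing to a Euclidean distance between samples) and the Karhunen–Loève projection can both be sketched in a few lines; function names and the tiny example are illustrative, not the paper's implementation.

```python
import numpy as np

def classify_nearest(spectrum, class_means):
    """Classify a spectrum by Euclidean distance to class-mean spectra,
    the criterion the optimal linear transform reduces to (illustrative)."""
    d = np.linalg.norm(class_means - spectrum, axis=1)
    return int(np.argmin(d))

def kl_coefficients(spectra, k=2):
    """Karhunen-Loeve (principal component) coefficients of a spectra
    matrix (rows = samples), via SVD of the mean-centred data.
    Minimal sketch of the transform discussed above."""
    X = spectra - spectra.mean(axis=0)
    # Rows of Vt are the KL axes (eigenvectors of the covariance matrix).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T
```

In the wet-leaf/dry-leaf setting described above, `class_means` would hold the mean terahertz spectra of the two classes, and the KL coefficients would supply the new axes in which the spectra are classified.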
Abstract:
A radiometric analysis of the light coupled by optical fiber amplitude modulating extrinsic-type reflectance displacement sensors is presented. Uncut fiber sensors show the largest range but a smaller responsivity. Single cut fiber sensors exhibit an improvement in responsivity at the expense of range. A further increase in responsivity as well as a reduction in the operational range is obtained when the double cut sensor configuration is implemented. The double cut configuration is particularly suitable in applications where feedback action is applied to the moving reflector surface. © 2000 American Institute of Physics.
Abstract:
Recent laboratory observations and advances in theoretical quantum chemistry allow a reappraisal of the fundamental mechanisms that determine the water vapour self-continuum absorption throughout the infrared and millimetre wave spectral regions. By starting from a framework that partitions bimolecular interactions between water molecules into free-pair states, true bound and quasi-bound dimers, we present a critical review of recent observations, continuum models and theoretical predictions. In the near-infrared bands of the water monomer, we propose that spectral features in recent laboratory-derived self-continuum measurements can be well explained as being due to a combination of true bound and quasi-bound dimers, when the spectrum of quasi-bound dimers is approximated as being double the broadened spectrum of the water monomer. Such a representation can explain both the wavenumber variation and the temperature dependence. Recent observations of the self-continuum absorption in the windows between these near-infrared bands indicate that widely used continuum models can underestimate the true strength by around an order of magnitude. An existing far-wing model does not appear able to explain the discrepancy, and although a dimer explanation is possible, currently available observations do not allow a compelling case to be made. In the 8–12 micron window, recent observations indicate that modern continuum models do not properly represent the temperature dependence, the wavelength variation, or both. The temperature dependence is suggestive of a transition from the dominance of true bound dimers at lower temperatures to quasi-bound dimers at higher temperatures. In the mid- and far-infrared spectral regions, recent theoretical calculations indicate that true bound dimers may explain at least 20–40% of the observed self-continuum. The possibility that quasi-bound dimers could contribute a similar additional amount is discussed.
Most recent theoretical considerations agree that water dimers are likely to be the dominant contributor to the self-continuum in the mm-wave spectral range.
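The representation proposed above for the quasi-bound dimer spectrum, double the broadened monomer spectrum, amounts to a convolution followed by a factor of two. The Gaussian broadening kernel and its width in the sketch below are assumptions for illustration; the physically appropriate broadening depends on the band in question.

```python
import numpy as np

def quasibound_dimer_spectrum(monomer_spectrum, sigma=2.0, half_width=6):
    """Approximate the quasi-bound-dimer spectrum as double the broadened
    monomer spectrum, per the representation proposed in the text.
    Broadening uses an assumed Gaussian kernel (unit area) of standard
    deviation sigma, in grid-point units; illustrative sketch only."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                       # normalise to unit area
    return 2.0 * np.convolve(monomer_spectrum, kernel, mode="same")
```

Because the kernel has unit area, the broadened spectrum conserves integrated intensity away from the array edges, so the output carries exactly twice the monomer absorption, as the approximation requires.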
Abstract:
Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can either be because they represent what the author believes the paper is about rather than what it actually is, or because they include keyphrases that are more classificatory than explanatory, e.g., “University of Poppleton” instead of “Knowledge Discovery in Databases”. Thus, there is a need for a system that can generate a diverse range of appropriate keyphrases that reflect the document. This paper proposes a solution that examines the synonyms of words and phrases in the document to find the underlying themes, and presents these as appropriate keyphrases. The primary method explores taking n-grams of the source document phrases and examining the synonyms of these, while the secondary considers grouping outputs by their synonyms. The experiments undertaken show that the primary method produces good results and that the secondary method produces both good results and potential for future work.
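The first step of the primary method, collecting candidate n-grams from the source document, might look like the sketch below. The stopword list is a tiny illustrative stand-in, and the synonym-expansion and synonym-grouping stages of the paper are not shown.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "of", "a", "in", "and", "to", "is", "for"}

def candidate_keyphrases(text, max_n=3, top=5):
    """Return the most frequent n-grams (n <= max_n) that neither start
    nor end with a stopword, as candidate keyphrases.
    Sketch of the primary method's first step only; the paper then
    expands and groups candidates via their synonyms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                continue
            counts[" ".join(gram)] += 1
    return [phrase for phrase, _ in counts.most_common(top)]
```

Note that trigrams such as "discovery in databases" survive the filter (the stopword sits in the middle), which is the usual convention for keyphrase candidates.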
Abstract:
We present a comparative analysis of projected impacts of climate change on river runoff from two types of distributed hydrological model: a global hydrological model (GHM) and catchment-scale hydrological models (CHMs). Analyses are conducted for six catchments that are global in coverage and feature strong contrasts in spatial scale as well as climatic and development conditions. These include the Liard (Canada), Mekong (SE Asia), Okavango (SW Africa), Rio Grande (Brazil), Xiangu (China) and Harper's Brook (UK). A single GHM (Mac-PDM.09) is applied to all catchments, whilst a different CHM is applied to each catchment. The CHMs typically simulate water resources impacts based on a more explicit representation of catchment water resources than that available from the GHM, and the CHMs include river routing. Simulations of average annual runoff, mean monthly runoff and high (Q5) and low (Q95) monthly runoff under baseline (1961–1990) and climate change scenarios are presented. We compare the simulated runoff response of each hydrological model to (1) prescribed increases in global mean temperature from the HadCM3 climate model and (2) a prescribed increase in global mean temperature of 2 °C for seven GCMs, to explore the response to climate model structural uncertainty. We find that differences in projected changes of mean annual runoff between the two types of hydrological model can be substantial for a given GCM, and they are generally larger for indicators of high and low flow. However, they are relatively small in comparison to the range of projections across the seven GCMs. Hence, for the six catchments and seven GCMs we considered, climate model structural uncertainty is greater than the uncertainty associated with the type of hydrological model applied.
Moreover, shifts in the seasonal cycle of runoff with climate change are represented similarly by both hydrological models, although for some catchments the monthly timing of high and low flows differs. This implies that for studies that seek to quantify and assess the role of climate model uncertainty on catchment-scale runoff, it may be equally feasible to apply a GHM as a CHM, especially when climate modelling uncertainty across the range of available GCMs is as large as it currently is. Whilst the GHM is able to represent the broad climate change signal that is represented by the CHMs, we find, however, that for some catchments there are differences between GHMs and CHMs in mean annual runoff due to differences in potential evaporation estimation methods, in the representation of the seasonality of runoff, and in the magnitude of changes in extreme monthly runoff, all of which have implications for future water management issues.
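The Q5 and Q95 indicators used throughout this comparison are the monthly flows exceeded 5% and 95% of the time. A minimal sketch, assuming the usual hydrological exceedance convention:

```python
import numpy as np

def flow_indicators(monthly_runoff):
    """High-flow (Q5) and low-flow (Q95) indicators from a monthly runoff
    series: the flows exceeded 5% and 95% of the time, respectively.
    Sketch assuming the standard exceedance-probability convention."""
    q5 = np.percentile(monthly_runoff, 95)   # exceeded only 5% of the time
    q95 = np.percentile(monthly_runoff, 5)   # exceeded 95% of the time
    return q5, q95
```

Comparing baseline and scenario values of Q5 and Q95 per catchment gives the high- and low-flow change indicators for which the GHM/CHM differences reported above are largest.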
Abstract:
Single-cell analysis is essential for understanding the processes of cell differentiation and metabolic specialisation in rare cell types. The amount of a single protein in a single cell can be as low as one copy per cell and is, for most proteins, in the attomole range or below, an amount usually considered insufficient for proteomic analysis. The development of modern mass spectrometers possessing increased sensitivity and mass accuracy, in combination with nano-LC-MS/MS, now enables the analysis of single-cell contents. In Arabidopsis thaliana, we have successfully identified nine unique proteins in a single-cell sample and 56 proteins from a pool of 15 single-cell samples from glucosinolate-rich S-cells by nano-LC-MS/MS proteomic analysis, thus establishing the proof-of-concept for true single-cell proteomic analysis. Dehydrin (ERD14_ARATH), two myrosinases (BGL37_ARATH and BGL38_ARATH), annexin (ANXD1_ARATH), vegetative storage proteins (VSP1_ARATH and VSP2_ARATH) and four proteins belonging to the S-adenosyl-L-methionine cycle (METE_ARATH, SAHH1_ARATH, METK4_ARATH and METK1/3_ARATH), with associated adenosine kinase (ADK1_ARATH), were amongst the proteins identified in these single-S-cell samples. Comparison of the functional groups of proteins identified in S-cells with epidermal/cortical cells and whole tissue provided a unique insight into the metabolism of S-cells. We conclude that S-cells are metabolically active and contain the machinery for de novo biosynthesis of methionine, a precursor for glucoraphanin, the most abundant glucosinolate in these cells. Moreover, since abundant TGG2 and TGG1 peptides were consistently found in single-S-cell samples, previously shown to have high amounts of glucosinolates, we suggest that myrosinases and glucosinolates can be localised in the same cells, but in separate subcellular compartments.
The complex membrane structure of S-cells was reflected by the presence of a number of proteins involved in membrane maintenance and cellular organisation.