973 results for pollen threshold values
Abstract:
The measured angular differential cross section (DCS) for the elastic scattering of electrons from Ar⁺(3s²3p⁵ ²P) at a collision energy of 16 eV is presented. By solving the Hartree-Fock equations, we calculate the corresponding theoretical DCS, including the coupling between the orbital angular momenta and spin of the incident electron and those of the target ion, as well as relaxation effects. Since the collision energy is above one inelastic threshold, that for the 3s²3p⁵ ²P–3s3p⁶ ²S transition, we consider the effects of inelastic absorption processes and elastic resonances on the DCS. The measurements deviate significantly from the Rutherford cross section over the full angular range observed, especially in the region of a deep minimum centered at approximately 75°. Our theory and an uncoupled, unrelaxed method using a local, spherically symmetric potential by Manson [Phys. Rev. 182, 97 (1969)] both reproduce the overall shape of the measured DCS, although the coupled Hartree-Fock approach describes the depth of the minimum more accurately. The minimum is shallower in the present theory owing to our lower average value for the d-wave non-Coulomb phase shift s2, which is highly sensitive to the different scattering potentials used in the two models. The present measurements and calculations therefore show the importance of including coupling and relaxation effects when accurately modeling electron-ion collisions. The phase shifts obtained by fitting to the measurements are compared with the values of Manson and of the present method.
Abstract:
Aim Determination of the main directions of variance in an extensive database of annual pollen deposition, and of the relationship between pollen data from modified Tauber traps and palaeoecological data. Location Northern Finland and Norway. Methods Pollen analysis of annual samples from pollen traps and contiguous high-resolution samples from a peat sequence. Numerical analysis (principal components analysis) of the resulting data. Results The main direction of variation in the trap data is due to the vegetation region in which each trap is located. A secondary direction of variation is due to the annual variability in pollen production of some of the tree taxa, especially Betula and Pinus. This annual variability is more conspicuous in 'absolute' data than in percentage data, which at this annual resolution become more random. There are systematic differences, with respect to peat-forming taxa, between pollen data from traps and pollen data from a peat profile collected over the same period of time. Main conclusions Annual variability in pollen production is rarely visible in fossil pollen samples because these cannot be sampled at precisely a 12-month resolution. At near-annual sampling resolution, it results in erratic percentage values that do not reflect changes in vegetation. Profiles sampled at near-annual resolution are better analysed in terms of pollen accumulation rates, with the realization that even these record changes in pollen abundance rather than changes in plant abundance. However, at the coarser temporal resolution common in most fossil samples, this variability does not mask the origin of the pollen in terms of its vegetation region. Climate change may not be recognizable from pollen assemblages until the change has persisted in the same direction long enough to alter the flowering (pollen production) pattern of the dominant trees.
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study which took place in the glaucoma unit of an academic department of ophthalmology. One eye of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with the program C-20 of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was presence of a typical glaucomatous optic disk appearance on stereoscopic examination, as judged by a glaucoma expert. The sensitivity and specificity, positive and negative predictive value, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for area under the ROC curve ranged from 82.5% to 93.9%. The largest area under the ROC curve (93.9%) was obtained with the FDT criterion defining abnormality as presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for the FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P < .0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools to diagnose glaucoma. The C-20 FDT and G1-TOP tests take approximately one-fourth and one-half, respectively, of the time taken by 24-2 HSF.
Abstract:
We employ the time-dependent R-matrix (TDRM) method to calculate anisotropy parameters for positive and negative sidebands of selected harmonics generated by two-color two-photon above-threshold ionization of argon. We consider odd harmonics of an 800-nm field ranging from the 13th to 19th harmonic, overlapped by a fundamental 800-nm IR field. The anisotropy parameters obtained using the TDRM method are compared with those obtained using a second-order perturbation theory with a model potential approach and a soft photon approximation approach. Where available, a comparison is also made to published experimental results. All three theoretical approaches provide similar values for anisotropy parameters. The TDRM approach obtains values that are closest to published experimental values. At high photon energies, the differences between each of the theoretical methods become less significant.
Abstract:
Two of the most frequently used methods of pollen counting on slides from Hirst-type traps are evaluated in this paper: the transverse traverse method and the longitudinal traverse method. The study was carried out during June–July 1996 and 1997 on slides from a trap at Worcester, UK. Three pollen types were selected for this purpose: Poaceae, Urticaceae and Quercus. The statistical results show that the daily concentrations followed similar trends (p < 0.01, R values between 0.78 and 0.96) with both methods during the two years, although the counts were slightly higher using the longitudinal traverse method. Significant differences were observed, however, when the distribution of the concentrations during 24-hour sampling periods was considered. For more detailed analysis, the daily counts obtained with both methods were correlated with the total number of pollen grains for the taxon over the whole slide, in two different situations: high and low concentrations of pollen in the atmosphere. In the case of high concentrations, the counts for all three taxa with both methods are significantly correlated with the total pollen count. In the samples with low concentrations, the Poaceae and Urticaceae counts with both methods are significantly correlated with the total counts, but none of the Quercus counts are. Consideration of the results indicates that both methods give a reasonable approximation to the count derived from the slide as a whole. More studies need to be done to explore the comparability of counting methods in order to work towards a universal methodology in aeropalynology.
Abstract:
Background Very few studies on human exposure to allergenic pollen have been conducted using direct methods, with background concentrations measured at city center monitoring stations typically taken as a proxy for exposure despite the inhomogeneous nature of atmospheric pollen concentrations. A 2003 World Health Organization report highlighted the need for an improved understanding of the relation between monitoring station data and actual exposure. Objective To investigate the relation between grass pollen dose and background concentrations measured at a monitoring station, to assess the fidelity of monitoring station data as a qualitative proxy for dose, and to evaluate the ratio of dose rate to background concentration. Methods Grass pollen dose data were collected in Aarhus, Denmark, in an area where grass pollen sources were prevalent, using Nasal Air Samplers. Sample collection lasted for approximately 25 to 30 minutes and was performed at 2-hour intervals from noon to midevening under moderate exercise by 2 individuals. Results A median ratio of dose rate to background concentration of 0.018 was recorded, with higher ratio values frequently occurring at 12 to 2 pm, the time of day when grass species likely to be present in the area are expected to flower. From 4 to 8 pm, dose rate and background concentration data were found to be strongly and significantly correlated (rs = 0.81). Averaged dose rate and background concentration data showed opposing temporal trends. Conclusion Where local emissions are not a factor, background concentration data constitute a good quantitative proxy for inhaled dose. The present ratio of dose rate to background concentration may aid the study of dose–response relations.
Abstract:
A pulsed Nd:YAG laser beam is used to produce a transient refractive index gradient in air adjoining the plane surface of the sample material. This refractive index gradient is probed by a continuous He-Ne laser beam propagating parallel to the sample surface. The observed deflection signals produced by the probe beam exhibit drastic variations when the pump laser energy density crosses the damage threshold for the sample. The measurements are used to estimate the damage threshold for a few polymer samples. The present values are found to be in good agreement with those determined by other methods.
Abstract:
The acoustic signals generated in solids by interaction with a pulsed laser beam are used to determine the ablation threshold of bulk polymer samples of Teflon (polytetrafluoroethylene) and nylon under irradiation from a Q-switched Nd:YAG laser at 1.06 µm wavelength. A suitably designed piezoelectric transducer is employed for the detection of the photoacoustic (PA) signals generated in this process. It has been observed that an abrupt increase in the amplitude of the PA signal occurs at the ablation threshold. Distinct threshold values also exist, corresponding to the different damage mechanisms operative at different laser energy densities, such as changes in surface morphology, bond breaking and melting.
Abstract:
Biclustering is the simultaneous clustering of both rows and columns of a data matrix. A measure called Mean Squared Residue (MSR) is used to simultaneously evaluate the coherence of rows and columns within a submatrix. In this paper a novel algorithm is developed for biclustering gene expression data using the newly introduced concept of an MSR difference threshold. In the first step, high-quality bicluster seeds are generated using the K-Means clustering algorithm. Then more genes and conditions (nodes) are added to the bicluster. Before a node is added, the MSR X of the bicluster is calculated; after the node is added, the MSR Y is calculated. The added node is deleted if Y minus X is greater than the MSR difference threshold, or if Y is greater than the MSR threshold, which depends on the dataset. The MSR difference threshold is different for the gene list and the condition list, and it also depends on the dataset. Proper values should be identified through experimentation in order to obtain biclusters of high quality. The results obtained on benchmark datasets clearly indicate that this algorithm is better than many of the existing biclustering algorithms.
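The add-and-test step described in this abstract can be sketched as follows. This is a minimal illustration assuming the standard definition of MSR from Cheng and Church; the function and parameter names are ours, not the paper's:

```python
import numpy as np

def msr(sub):
    """Mean Squared Residue of a bicluster submatrix: mean of the squared
    residues a_ij - rowmean_i - colmean_j + overall_mean."""
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    overall = sub.mean()
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

def try_add_row(data, rows, cols, candidate, msr_diff_threshold, msr_threshold):
    """Tentatively add a candidate row to the bicluster (rows, cols); keep it
    only if the MSR increase Y - X stays within the difference threshold and
    the new MSR Y stays below the absolute MSR threshold."""
    x = msr(data[np.ix_(rows, cols)])            # MSR X before adding the node
    new_rows = rows + [candidate]
    y = msr(data[np.ix_(new_rows, cols)])        # MSR Y after adding the node
    if y - x > msr_diff_threshold or y > msr_threshold:
        return rows                              # node rejected (deleted)
    return new_rows                              # node accepted
```

An analogous `try_add_col` would apply the same test with the condition-list thresholds, which the abstract notes are set separately.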
Abstract:
The photoionization cross sections for the production of the Kr II 4s state and Kr II satellite states were studied in the 4s ionization threshold region. The interference of direct photoionization and ionization through the autoionization decay of doubly-excited states was considered. In the calculations of doubly-excited state energies, performed by a configuration interaction technique, the 4p spin-orbit interaction and the (Kr II core)-(excited electron) Coulomb interaction were included. The theoretical cross sections are in many cases in good agreement with the measured values. Strong resonant features in the satellite spectra with threshold energies greater than 30 eV are predicted.
Abstract:
Globally there have been a number of concerns about the development of genetically modified crops, many of which relate to the implications of gene flow at various levels. In Europe these concerns have led the European Union (EU) to promote the concept of 'coexistence' to allow the freedom to plant conventional and genetically modified (GM) varieties while minimising the presence of transgenic material within conventional crops. Should a premium for non-GM varieties emerge on the market, the presence of transgenes would generate a 'negative externality' for conventional growers. The establishment of a maximum tolerance level for the adventitious presence of GM material in conventional crops produces a threshold effect in the external costs. The existing literature suggests that, apart from the biological characteristics of the plant under consideration (e.g. self-pollination rates, entomophilous species, anemophilous species, etc.), gene flow at the landscape level is affected by the relative size of the source and sink populations and the spatial arrangement of the fields in the landscape. In this paper, we take genetically modified herbicide-tolerant oilseed rape (GM HT OSR) as a model crop. Starting from an individual pollen dispersal function, we develop a spatially explicit numerical model in order to assess the effect of the size of the source/sink populations and the degree of spatial aggregation on the extent of gene flow into conventional OSR varieties under two alternative settings. We find that when the transgene presence in conventional produce is detected at the field level, the external cost will increase with the size of the source area and with the level of spatial disaggregation. On the other hand, when the transgene presence is averaged among all conventional fields in the landscape (e.g. because of grain mixing before detection), the external cost will only depend on the relative size of the source area. The model could readily be incorporated into an economic evaluation of policies to regulate adoption of GM HT OSR.
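The kind of spatially explicit source/sink computation this abstract describes can be sketched on a grid. This is an illustration only: the exponential dispersal kernel, the function names, and the parameters are assumptions, not the paper's actual dispersal function:

```python
import numpy as np

def gene_flow_exposure(grid, kernel_scale, cell_size=1.0):
    """Relative transgenic pollen exposure of each conventional cell.

    `grid` marks GM source cells with 1 and conventional cells with 0.
    Each source contributes exp(-d / kernel_scale) at distance d, a common
    (but here assumed) form for an individual pollen dispersal function.
    """
    ny, nx = grid.shape
    ys, xs = np.indices((ny, nx))
    exposure = np.zeros((ny, nx), dtype=float)
    for sy, sx in np.argwhere(grid == 1):
        d = np.hypot((ys - sy) * cell_size, (xs - sx) * cell_size)
        exposure += np.exp(-d / kernel_scale)
    exposure[grid == 1] = 0.0  # track flow into conventional fields only
    return exposure
```

Summing the exposure of the cells belonging to one field gives a field-level figure, while averaging over all conventional cells mimics the grain-mixing setting contrasted in the abstract.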
Abstract:
The use of semiochemicals for manipulation of the pollen beetle Meligethes aeneus (Fabricius) (Coleoptera: Nitidulidae) is being investigated for potential incorporation into a push-pull control strategy for this pest, which damages oilseed rape, Brassica napus L. (Brassicaceae), throughout Europe. The response of M. aeneus to non-host plant volatiles was investigated in laboratory assays to establish whether they have any effect on host plant location behaviour. Two approaches were used. First, a novel moving-air bioassay using air funnels was developed to compare the response of M. aeneus to several non-host plant essential oils. The beetles avoided the host plant flowers in the presence of non-host volatiles, suggesting that M. aeneus uses olfactory cues in host location and/or acceptance. The results were expressed as 'repellency values' in order to compare the effects of the different oils tested. Lavender (Lavandula angustifolia Miller) (Lamiaceae) essential oil gave the highest repellency value. In addition, a four-arm olfactometer was used to investigate olfactory responses, as this technique eliminated the influence of host plant visual and contact cues. The attraction to host plant volatiles was reduced by the addition of non-host plant volatiles; moreover, beyond masking the host plant volatiles, the non-host volatiles were avoided when presented alone. This is encouraging for the potential use of non-host plants within a push-pull strategy to reduce pest colonisation of crops. Further testing in more realistic semi-field and field trials is underway.
Abstract:
Area-wide development viability appraisals are undertaken to determine the economic feasibility of policy targets in relation to planning obligations. Essentially, development viability appraisals consist of a series of residual valuations of hypothetical development sites across a local authority area at a particular point in time. The valuations incorporate the estimated financial implications of the proposed level of planning obligations. To determine viability, the output land values are benchmarked against threshold land value, and therefore the basis on which this threshold is established and the level at which it is set are critical to development viability appraisal at the policy-setting (area-wide) level. Essentially it is an estimate of the value at which a landowner would be prepared to sell. If the estimated site values are higher than the threshold land value, the policy target is considered viable. This paper investigates the effectiveness of existing methods of determining threshold land value, testing them against the relationship between development value and costs. Modelling reveals that a threshold land value that is not related to shifts in development value renders marginal sites unviable and fails to collect proportionate planning obligations from high-value/low-cost sites. Testing the model against national average house prices and build costs reveals the high degree of volatility in residual land values over time and underlines the importance of making threshold land value relative to the main driver of this volatility, namely development value.
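The residual-valuation benchmarking logic described above can be sketched as follows. This is a deliberately simplified illustration; real appraisals also include finance costs, professional fees and phasing, and all names and the profit treatment here are assumptions:

```python
def residual_land_value(gdv, build_costs, profit_rate, planning_obligations):
    """Simplified residual valuation: land value is what remains of gross
    development value (GDV) after build costs, the developer's required
    profit (taken here as a share of GDV), and planning obligations."""
    developer_profit = gdv * profit_rate
    return gdv - build_costs - developer_profit - planning_obligations

def is_viable(rlv, threshold_land_value):
    """The policy target is viable on a site if the residual land value
    meets or exceeds the threshold land value (the landowner's assumed
    minimum acceptable price)."""
    return rlv >= threshold_land_value
```

Running the same comparison across many hypothetical sites, and through rising and falling house prices, reproduces the paper's point: a fixed threshold unrelated to development value flips marginal sites in and out of viability as GDV moves.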
Abstract:
We test the expectations theory of the term structure of U.S. interest rates in nonlinear systems. These models allow the response of the change in short rates to past values of the spread to depend upon the level of the spread. The nonlinear system is tested against a linear system, and the results of testing the expectations theory in both models are contrasted. We find that the results of tests of the implications of the expectations theory depend on the size and sign of the spread. The long maturity spread predicts future changes of the short rate only when it is high.
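The regime-dependent response this abstract describes can be sketched as a two-regime threshold regression, in which the short-rate change loads on the lagged spread with coefficients that switch at a spread threshold. This is a minimal single-equation illustration, not the paper's full nonlinear system, and the names are assumed:

```python
import numpy as np

def fit_threshold_model(d_short, spread_lag, threshold):
    """Fit d_short_t = a + b * spread_{t-1} by OLS, separately in the
    regimes where the lagged spread is at/below vs. above the threshold,
    so the response to the spread depends on the level of the spread."""
    coefs = {}
    for name, mask in (("low", spread_lag <= threshold),
                       ("high", spread_lag > threshold)):
        X = np.column_stack([np.ones(mask.sum()), spread_lag[mask]])
        beta, *_ = np.linalg.lstsq(X, d_short[mask], rcond=None)
        coefs[name] = beta  # [intercept, slope] for this regime
    return coefs
```

Comparing the fitted slopes across regimes is the spirit of the test reported: under the expectations theory the spread should predict short-rate changes, and the abstract finds it does so only when the spread is high.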
Abstract:
14C-dated pollen and lake-level data from Europe are used to assess the spatial patterns of climate change between 6000 yr BP and present, as simulated by the NCAR CCM1 (National Center for Atmospheric Research, Community Climate Model, version 1) in response to the change in the Earth’s orbital parameters during this period. First, reconstructed 6000 yr BP values of bioclimate variables obtained from pollen and lake-level data with the constrained-analogue technique are compared with simulated values. Then a 6000 yr BP biome map obtained from pollen data with an objective biome reconstruction (biomization) technique is compared with BIOME model results derived from the same simulation. Data and simulations agree in some features: warmer-than-present growing seasons in N and C Europe allowed forests to extend further north and to higher elevations than today, and warmer winters in C and E Europe prevented boreal conifers from spreading west. More generally, however, the agreement is poor. Predominantly deciduous forest types in Fennoscandia imply warmer winters than the model allows. The model fails to simulate winters cold enough, or summers wet enough, to allow temperate deciduous forests their former extended distribution in S Europe, and it incorrectly simulates a much expanded area of steppe vegetation in SE Europe. Similar errors have also been noted in numerous 6000 yr BP simulations with prescribed modern sea surface temperatures. These errors are evidently not resolved by the inclusion of interactive sea-surface conditions in the CCM1. Accurate representation of mid-Holocene climates in Europe may require the inclusion of dynamical ocean–atmosphere and/or vegetation–atmosphere interactions that most palaeoclimate model simulations have so far disregarded.