190 results for DATA SET
Abstract:
Land-use changes can alter the spatial population structure of plant species, which may in turn affect the attractiveness of flower aggregations to different groups of pollinators at different spatial scales. To assess how pollinators respond to spatial heterogeneity of plant distributions, and whether honeybees affect visitation by other pollinators, we used an extensive data set comprising ten plant species and their flower visitors from five European countries. In particular, we tested the hypothesis that the composition of the flower visitor community, in terms of visitation frequencies by different pollinator groups, was affected by the spatial plant population structure, viz. area and density measures, at a within-population (‘patch’) and an among-population (‘population’) scale. We found that patch area and population density were the spatial variables that best explained the variation in visitation frequencies within the pollinator community. Honeybees had higher visitation frequencies in larger patches, while bumblebees and hoverflies had higher visitation frequencies in sparser populations. Solitary bees had higher visitation frequencies in sparser populations and smaller patches. We also tested the hypothesis that honeybees affect the composition of the pollinator community by altering the visitation frequencies of other groups of pollinators. There was a positive relationship between the visitation frequencies of honeybees and bumblebees, while the relationship with hoverflies and solitary bees varied (positive, negative, or no relationship) depending on the plant species under study. The overall conclusion is that the spatial structure of plant populations affects different groups of pollinators in contrasting ways at both the local (‘patch’) and the larger (‘population’) scale, and that honeybees affect flower visitation by other pollinator groups in various ways, depending on the plant species under study.
These contrasting responses emphasize the need to investigate the entire pollinator community when the effects of landscape change on plant–pollinator interactions are studied.
Abstract:
Covered bonds are a promising alternative for prime mortgage securitization. In this paper, we explore risk premia in the covered bond market and particularly investigate whether and how credit risk is priced. In extant literature, yield spreads between high-quality covered bonds and government bonds are often interpreted as pure liquidity premia. In contrast, we show that although liquidity is important, it is not the exclusive risk factor. Using a hand-collected data set of cover pool information, we find that the credit quality of the cover assets is an important determinant of covered bond yield spreads. This effect is particularly strong in times of financial turmoil and has a significant influence on the issuer's refinancing cost.
Abstract:
In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
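The robust-statistics step mentioned above can be sketched in a few lines: the median absolute deviation (MAD), scaled to be consistent with the Gaussian standard deviation, estimates scatter without being inflated by occasional outlying retrievals. This is a generic illustration of the technique, not the actual ACE‐FTS processing; the sample values are invented.

```python
import statistics

def robust_scatter(values):
    """Scatter estimate via the median absolute deviation (MAD),
    scaled by 1.4826 so it matches the standard deviation for
    Gaussian data while remaining insensitive to outliers."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return 1.4826 * mad

# Gaussian-like sample plus one gross outlier (e.g. a bad retrieval):
# the standard deviation is inflated by the outlier, the MAD is not.
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 55.0]
print(statistics.stdev(sample), robust_scatter(sample))
```

With the outlier present, the standard deviation is roughly 50 times larger than the MAD-based estimate, which is why robust statistics make the scatter-based precision check usable on data sets containing occasional bad profiles.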
Abstract:
In recent years, researchers and policy makers have recognized that nontimber forest products (NTFPs) extracted from forests by rural people can make a significant contribution to their well-being and to the local economy. This study presents and discusses data that describe the contribution of NTFPs to cash income in the dry deciduous forests of Orissa and Jharkhand, India. In its focus on cash income, this study sheds light on how the sale of NTFPs and products that use NTFPs as inputs contribute to the rural economy. From analysis of a unique data set that was collected over the course of a year, the study finds that the contribution of NTFPs to cash income varies across ecological settings, seasons, income level, and caste. Such variation should inform where and when to apply NTFP forest access and management policies.
Abstract:
Interannual anomalies in vertical profiles of stratospheric ozone, in both equatorial and extratropical regions, have been shown to exhibit a strong seasonal persistence, namely, extended temporal autocorrelations during certain times of the calendar year. Here we investigate the relationship between this seasonal persistence of equatorial and extratropical ozone anomalies using the SAGE‐corrected SBUV data set, which provides a long‐term ozone profile time series. For the regions of the stratosphere where ozone is under purely dynamical or purely photochemical control, the seasonal persistence of equatorial and extratropical ozone anomalies arises from distinct mechanisms but preserves an anticorrelation between tropical and extratropical anomalies established during the winter period. In the 16–10 hPa layer, where ozone is controlled by both dynamical and photochemical processes, equatorial ozone anomalies exhibit a completely different behavior compared to ozone anomalies above and below in terms of variability, seasonal persistence, and especially the relationship between equatorial and extratropical ozone. Cross‐latitude‐time correlations show that for the 16–10 hPa layer, Northern Hemisphere (NH) extratropical ozone anomalies show the same variability as equatorial ozone anomalies but lagged by 3–6 months. High correlation coefficients are observed during the time frame of seasonal persistence of ozone anomalies, which is June–December for equatorial ozone and shifts by approximately 3–6 months when going from the equatorial region to the NH extratropics. Thus, in the transition zone between dynamical and photochemical control, equatorial ozone anomalies established in boreal summer/autumn are mirrored by NH extratropical ozone anomalies with a time lag similar to transport time scales.
Equatorial ozone anomalies established in boreal winter/spring are likewise correlated with ozone anomalies in the Southern Hemisphere extratropics with a time lag comparable to transport time scales, similar to what is seen in the NH. However, the correlations between equatorial and SH extratropical ozone in the 16–10 hPa layer are weak.
Abstract:
Analysis of the variability of equatorial ozone profiles in the Stratospheric Aerosol and Gas Experiment‐corrected Solar Backscatter Ultraviolet data set demonstrates a strong seasonal persistence of interannual ozone anomalies, revealing a seasonal dependence of equatorial ozone variability. In the lower stratosphere (40–25 hPa) and in the upper stratosphere (6–4 hPa), ozone anomalies persist from approximately November until June of the following year, while ozone anomalies in the layer between 16 and 10 hPa persist from June to December. Analysis of zonal wind fields in the lower stratosphere and temperature fields in the upper stratosphere reveals a similar seasonal persistence of the zonal wind and temperature anomalies associated with the quasi‐biennial oscillation (QBO). Thus, the persistence of interannual ozone anomalies in the lower and upper equatorial stratosphere, which are mainly associated with the well‐known QBO ozone signal through the QBO‐induced meridional circulation, is related to a newly identified seasonal persistence of the QBO itself. The upper stratospheric QBO ozone signal is argued to arise from a combination of QBO‐induced temperature and NOx perturbations, with the former dominating at 5 hPa and the latter at 10 hPa. Ozone anomalies in the transition zone between dynamical and photochemical control of ozone (16–10 hPa) are less influenced by the QBO signal and show a quite different seasonal persistence compared to the regions above and below.
Abstract:
A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be computed analytically without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for noisy classification problems.
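The analytic leave-one-out idea can be illustrated for the simplest linear-in-the-parameter model, ordinary least squares with an intercept and a slope, using the classical identity e_loo_i = e_i / (1 − h_ii), where h_ii is the i-th leverage. This is only a sketch of the general principle; the paper's algorithm applies it to an elastic-net model with orthogonal decompositions, which is not reproduced here, and the data below are invented.

```python
# Sketch of the analytic leave-one-out (LOO) idea for the simplest
# linear-in-the-parameter model: least squares with intercept + slope.
# The classical identity e_loo_i = e_i / (1 - h_ii) gives each LOO
# residual from the full-data fit, without ever refitting the model.

def loo_residuals(xs, ys):
    n = len(xs)
    # Normal equations for the design matrix X = [[1, x_i]]:
    # X'X = [[n, Sx], [Sx, Sxx]], inverted explicitly below.
    Sx = sum(xs)
    Sxx = sum(x * x for x in xs)
    Sy = sum(ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * Sxx - Sx * Sx
    w0 = (Sxx * Sy - Sx * Sxy) / det   # intercept
    w1 = (n * Sxy - Sx * Sy) / det     # slope
    loo = []
    for x, y in zip(xs, ys):
        e = y - (w0 + w1 * x)          # ordinary residual
        # leverage h_ii = [1, x] (X'X)^-1 [1, x]'
        h = (Sxx - 2 * x * Sx + n * x * x) / det
        loo.append(e / (1.0 - h))      # exact LOO residual
    return loo

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]
print(loo_residuals(xs, ys))
```

For ordinary least squares the identity is exact, which is what makes the LOO criterion cheap to evaluate without splitting the data set.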
Abstract:
This letter presents an effective approach for selecting appropriate terrain modeling methods when forming a digital elevation model (DEM). The approach achieves a balance between modeling accuracy and modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces as either complex or moderate based on this index together with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used in constructing an experimental data set. The results show that the terrain complexity index properly reflects terrain complexity, and that the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than one designed from either feature alone. The statistical results show that the average classification accuracy of the SVMs is about 84.3% ± 0.9% for the two terrain types (complex or moderate). For various ratios of complex to moderate terrain in a selected data set, the DEM modeling speed increases by up to 19.5% at a given DEM accuracy.
Abstract:
Our knowledge of stratospheric O3-N2O correlations is extended, and their potential for model-measurement comparison assessed, using data from the Atmospheric Chemistry Experiment (ACE) satellite and the Canadian Middle Atmosphere Model (CMAM). ACE provides the first comprehensive data set for the investigation of interhemispheric, interseasonal, and height-resolved differences of the O3-N2O correlation structure. By subsampling the CMAM data, the representativeness of the ACE data is evaluated. In the middle stratosphere, where the correlations are not compact and therefore mainly reflect the data sampling, joint probability density functions provide a detailed picture of key aspects of transport and mixing, but also trace polar ozone loss. CMAM captures these important features, but exhibits a displacement of the tropical pipe into the Southern Hemisphere (SH). Below about 21 km, the ACE data generally confirm the compactness of the correlations, although chemical ozone loss tends to destroy the compactness during late winter/spring, especially in the SH. This allows a quantitative comparison of the correlation slopes in the lower and lowermost stratosphere (LMS), which exhibit distinct seasonal cycles that reveal the different balances between diabatic descent and horizontal mixing in these two regions in the Northern Hemisphere (NH), reconciling differences found in aircraft measurements, and the strong role of chemical ozone loss in the SH. The seasonal cycles are qualitatively well reproduced by CMAM, although their amplitude is too weak in the NH LMS. The correlation slopes allow a "chemical" definition of the LMS, which is found to vary substantially in vertical extent with season.
Abstract:
A continuous tropospheric and stratospheric vertically resolved ozone time series, from 1850 to 2099, has been generated to be used as forcing in global climate models that do not include interactive chemistry. A multiple linear regression analysis of SAGE I+II satellite observations and polar ozonesonde measurements is used for the stratospheric zonal mean data set during the well-observed period from 1979 to 2009. In addition to terms describing the mean annual cycle, the regression includes terms representing equivalent effective stratospheric chlorine (EESC) and the 11-yr solar cycle variability. The EESC regression fit coefficients, together with pre-1979 EESC values, are used to extrapolate the stratospheric ozone time series backward to 1850. While a similar procedure could be used to extrapolate into the future, coupled chemistry climate model (CCM) simulations indicate that future stratospheric ozone abundances are likely to be significantly affected by climate change, and capturing such effects through a regression model approach is not feasible. Therefore, the stratospheric ozone data set is extended into the future (merged in 2009) with multimodel mean projections from 13 CCMs that performed a simulation until 2099 under the SRES (Special Report on Emission Scenarios) A1B greenhouse gas scenario and the A1 adjusted halogen scenario in the second round of the Chemistry-Climate Model Validation (CCMVal-2) Activity. The stratospheric zonal mean ozone time series is merged with a three-dimensional tropospheric data set extracted from simulations of the past by two CCMs (CAM3.5 and GISSPUCCINI) and of the future by one CCM (CAM3.5). The future tropospheric ozone time series continues the historical CAM3.5 simulation until 2099 following the four different Representative Concentration Pathways (RCPs).
Generally good agreement is found between the historical segment of the ozone database and satellite observations, although it should be noted that total column ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m−2, lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce an RF of −0.08 W m−2, which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of −0.05 W m−2, but which is within the stated range of −0.15 to +0.05 W m−2. The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while the AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics. This shift is due to increases in polar stratospheric ozone, but decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century.
Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.1 and 0.4 W m−2 by 2100. The ozone data set described here has been released for the Coupled Model Intercomparison Project (CMIP5) model simulations in the netCDF Climate and Forecast (CF) Metadata Convention at the PCMDI website (http://cmip-pcmdi.llnl.gov/).
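The regression step described in this abstract can be sketched in a few lines: a design matrix with a mean term, annual-cycle harmonics, an EESC term, and a solar-cycle term is fit by least squares. All inputs below (the proxy series and the coefficients) are synthetic placeholders, since the real analysis uses SAGE I+II and ozonesonde records; only the mechanics of the fit are illustrated.

```python
import numpy as np

# Minimal sketch of a multiple linear regression of the kind described
# above: monthly "ozone" is fit with annual-cycle harmonics, an EESC
# proxy, and an ~11-yr solar-cycle proxy. All series are synthetic.

months = np.arange(240)                      # 20 years of monthly data
t = 2 * np.pi * months / 12.0
eesc = 0.5 + 0.002 * months                  # hypothetical EESC proxy
solar = np.sin(2 * np.pi * months / 132.0)   # ~11-yr cycle proxy

# Design matrix: constant, annual harmonics, EESC, solar
X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t), eesc, solar])

# Synthetic "observed" ozone built from known coefficients plus noise
rng = np.random.default_rng(0)
true = np.array([300.0, 10.0, 5.0, -40.0, 3.0])
y = X @ true + rng.normal(0, 1.0, size=months.size)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # recovered coefficients, close to `true`
```

In the real data set the fitted EESC coefficient, combined with pre-1979 EESC values, is what allows the ozone series to be extrapolated backward in time.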
Abstract:
Correlations between various chemical species simulated by the Canadian Middle Atmosphere Model, a general circulation model with fully interactive chemistry, are considered in order to investigate the general conditions under which compact correlations can be expected to form. At the same time, the analysis serves to validate the model. The results are compared to previous work on this subject, both from theoretical studies and from atmospheric measurements made from space and from aircraft. The results highlight the importance of having a data set with good spatial coverage when working with correlations and provide a background against which the compactness of correlations obtained from atmospheric measurements can be confirmed. It is shown that for long-lived species, distinct correlations are found in the model in the tropics, the extratropics, and the Antarctic winter vortex. Under these conditions, sparse sampling such as arises from occultation instruments is nevertheless suitable to define a chemical correlation within each region even from a single day of measurements, provided a sufficient range of mixing ratio values is sampled. In practice, this means a large vertical extent, though the requirements are less stringent at more poleward latitudes.
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
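Fisher's linear discriminant, used in this paper to illustrate classification concepts, can be sketched for two-dimensional feature vectors: the projection direction is w = Sw⁻¹(m1 − m0), where Sw is the within-class scatter matrix and m0, m1 are the class means. The "healthy"/"faulty" points below are invented for illustration, not FIT simulations or tomographic reconstructions.

```python
# Minimal sketch of Fisher's linear discriminant for two classes of
# 2-D feature vectors. The "healthy"/"faulty" points are invented.

def fisher_direction(class0, class1):
    """Projection direction w = Sw^-1 (m1 - m0) for 2-D points,
    with the 2x2 within-class scatter matrix inverted explicitly."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def scatter(pts, m):
        sxx = sum((p[0] - m[0]) ** 2 for p in pts)
        syy = sum((p[1] - m[1]) ** 2 for p in pts)
        sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts)
        return sxx, sxy, syy

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sxx, sxy, syy = s0[0] + s1[0], s0[1] + s1[1], s0[2] + s1[2]
    det = sxx * syy - sxy * sxy          # 2x2 inverse via the adjugate
    d = (m1[0] - m0[0], m1[1] - m0[1])
    return ((syy * d[0] - sxy * d[1]) / det,
            (-sxy * d[0] + sxx * d[1]) / det)

def project(p, w):
    """Score a point; classify by thresholding between class means."""
    return p[0] * w[0] + p[1] * w[1]

healthy = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0), (1.1, 1.2)]
faulty = [(3.0, 2.9), (3.2, 3.1), (2.8, 3.0), (3.1, 3.2)]
w = fisher_direction(healthy, faulty)
```

A new measurement is classified by projecting it onto w and thresholding, e.g. at the midpoint of the projected class means; ill-posedness shows up when Sw is nearly singular and the inversion becomes unstable.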
Abstract:
We explore the influence of the choice of attenuation factor on Katz centrality indices for evolving communication networks. For given snapshots of a network observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners in the network. In this article, we examine the sensitivity of communicability indices to the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time, and their computation in the case of large networks. We propose relaxed communicability measures in which the spectral-radius bound on the attenuation factor is lifted and the adjacency matrix is normalised in order to maintain the convergence of the measure. Using a vitality-based measure of both standard and relaxed communicability indices, we examine ways of establishing the most important individuals for broadcasting and receiving of messages related to community bridging roles. We illustrate our findings with two examples of real-life networks: the MIT reality mining data set of daily communications between 106 individuals during one year, and a UK Twitter mentions network of direct messages between 12.4k individuals during one week.
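The attenuation-factor constraint discussed above can be seen in a small sketch: Katz-style scores solve x = αAx + 1, and the fixed-point iteration converges only when α is below the reciprocal of the spectral radius of A. The toy path graph below stands in for the real MIT and Twitter networks, which are not reproduced here.

```python
# Minimal sketch of Katz-style centrality and its attenuation-factor
# constraint. Scores x solve x = alpha * A x + 1, which the fixed-point
# iteration below finds only when alpha < 1 / spectral_radius(A).

def katz(adj, alpha, iters=1000, tol=1e-10):
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        new = [1.0 + alpha * sum(adj[i][j] * x[j] for j in range(n))
               for i in range(n)]
        if max(abs(v) for v in new) > 1e12:
            raise ValueError("diverging: alpha exceeds 1/spectral radius")
        if max(abs(a - b) for a, b in zip(new, x)) < tol:
            return new
        x = new
    raise ValueError("no convergence within the iteration budget")

# Undirected path graph 0-1-2-3: the middle nodes are best connected.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

scores = katz(A, alpha=0.3)   # 0.3 < 1/rho(A) ~ 0.618, so this converges
```

Calling `katz(A, alpha=0.8)` diverges, which is exactly the constraint the relaxed measures sidestep by normalising the adjacency matrix instead of bounding α snapshot by snapshot.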
Abstract:
Developing models to predict the effects of social and economic change on agricultural landscapes is an important challenge. Model development often involves making decisions about which aspects of the system require detailed description and which are reasonably insensitive to the assumptions. However, important components of the system are often left out because parameter estimates are unavailable. In particular, measurements of the relative influence on farmer decision making of different objectives, such as risk and environmental management, have proven difficult to quantify. We describe a model that can make predictions of land use on the basis of profit alone or with the inclusion of explicit additional objectives. Importantly, our model is specifically designed to use parameter estimates for additional objectives obtained via farmer interviews. By statistically comparing the outputs of this model with a large farm-level land-use data set, we show that cropping patterns in the United Kingdom contain a significant contribution from farmers’ preferences for objectives other than profit. In particular, we found that risk aversion had an effect on the accuracy of model predictions, whereas preference for a particular number of crops grown was less important. While nonprofit objectives have frequently been identified as factors in farmers’ decision making, our results take this analysis further by demonstrating the relationship between these preferences and actual cropping patterns.
Abstract:
Background: Since their inception, Twitter and related microblogging systems have provided a rich source of information for researchers and have attracted interest in their affordances and use. Since 2009 PubMed has included 123 journal articles on medicine and Twitter, but no overview exists as to how the field uses Twitter in research. // Objective: This paper aims to identify published work relating to Twitter indexed by PubMed, and then to classify it. This classification will provide a framework in which future researchers will be able to position their work, and an understanding of the current reach of research using Twitter in medical disciplines. Limiting the study to papers indexed by PubMed ensures the work provides a reproducible benchmark. // Methods: Papers indexed by PubMed on Twitter and related topics were identified and reviewed. The papers were then qualitatively classified based on their title and abstract to determine their focus. The work that was Twitter focused was studied in detail to determine what data, if any, it was based on, and from this a categorization of the data set sizes used in the studies was developed. Using open coded content analysis, additional important categories were identified, relating to the primary methodology, domain, and aspect. // Results: As of 2012, PubMed comprises more than 21 million citations from the biomedical literature, and from these a corpus of 134 potentially Twitter-related papers was identified, eleven of which were subsequently found not to be relevant. There were no papers prior to 2009 relating to microblogging, a term first used in 2006. Of the remaining 123 papers that mentioned Twitter, thirty were focused on Twitter (the others referring to it tangentially). The early Twitter-focused papers introduced the topic and highlighted its potential, without carrying out any form of data analysis.
The majority of published papers used analytic techniques to sort through thousands, if not millions, of individual tweets, often depending on automated tools to do so. Our analysis demonstrates that researchers are starting to use knowledge discovery methods and data mining techniques to understand vast quantities of tweets: the study of Twitter is becoming quantitative research. // Conclusions: This work is, to the best of our knowledge, the first overview study of medically related research based on Twitter and related microblogging. We have used five dimensions to categorise published medically related research on Twitter. This classification provides a framework within which researchers studying the development and use of Twitter within medically related research, and those undertaking comparative studies of research relating to Twitter in the area of medicine and beyond, can position and ground their work.