111 results for Distance sampling


Relevance: 20.00%

Abstract:

An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass, and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.

Relevance: 20.00%

Abstract:

This paper takes as its starting point recent work on caring for distant others, which is one expression of renewed interest in moral geographies. It examines relationships in aid chains connecting donors/carers in the First World or North and recipients/cared-for in the Third World or South. Assuming the predominance of relationships between strangers and of universalism as a basis for moral motivation, I draw upon gift theory in order to characterize two basic forms of gift relationship. The first is purely altruistic; the other is fully reciprocal and obligatory within the framework of institutions, values and social forces within specific relationships of politics and power. This conception problematizes donor-recipient relationships in the context of two modernist models of aid chains: the Resource Transfer and the Beyond Aid paradigms. In the first, donor domination means low levels of reciprocity despite rhetoric about partnership and participation. The second identifies potential for greater reciprocity on the basis of combination between social movements and non-governmental organizations at both national and transnational levels, albeit at the risk of marginalizing the competencies of states. Finally, I evaluate post-structural critiques which also problematize aid chain relationships, both in terms of the bases (such as universals and difference) upon which care might be constructed and the means (such as forms of positionality and mutuality) by which it might be achieved.

Relevance: 20.00%

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling design with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components, starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
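The accumulation step can be sketched numerically. This is a minimal sketch with invented stage lags and variance components (not values from the paper's surveys), and in Python rather than the paper's Fortran:

```python
import numpy as np

# Hypothetical stage separations (in geometric progression) and variance
# components from a four-stage hierarchical ANOVA; both arrays are invented
# for illustration, not taken from the paper's surveys.
lags = np.array([6.0, 18.0, 54.0, 162.0])      # separating distance, m
components = np.array([0.8, 1.1, 1.9, 0.7])    # variance component per stage

# Accumulating the components from the shortest lag upward yields the
# rough, first-pass variogram the paper describes.
rough_variogram = np.cumsum(components)

for h, g in zip(lags, rough_variogram):
    print(f"lag {h:6.1f} m  semivariance {g:.2f}")
```

Because the cumulative sums are monotone by construction, the rough variogram can only rise with lag; resolving a sill or nugget more finely requires the follow-up survey the paper recommends.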

Relevance: 20.00%

Abstract:

Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. The method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations from the spaceborne lidar flown on NASA's LITE mission. Results suggest that the variance of the cloud fraction estimate is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with an increasing number of transects. A sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction must therefore take into account the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's CALIPSO and CloudSat.
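The effect of cloud-field variability on the sampling uncertainty can be illustrated with a toy Monte Carlo experiment (not the paper's Bayesian method). The two-state Markov-chain cloud model and all parameter values below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def transect(n_pixels, mean_len):
    """One synthetic transect: a two-state (0 = clear, 1 = cloudy) Markov
    chain with equal mean run lengths for both states, so the true cloud
    fraction is 0.5. The model and all parameters are illustrative."""
    leave = 1.0 / mean_len                # probability of ending a run
    x = np.empty(n_pixels, dtype=int)
    x[0] = rng.integers(2)
    for i in range(1, n_pixels):
        x[i] = 1 - x[i - 1] if rng.random() < leave else x[i - 1]
    return x

def sampling_std(n_transects, n_pixels, mean_len):
    """Spread of the transect-derived cloud fraction around the true 0.5."""
    estimates = [transect(n_pixels, mean_len).mean()
                 for _ in range(n_transects)]
    return float(np.std(estimates))

# Shorter cloud/clear runs give more independent samples per transect,
# hence a smaller sampling uncertainty, as the abstract argues.
s_short = sampling_std(200, 1000, mean_len=5)
s_long = sampling_std(200, 1000, mean_len=50)
print(s_short, s_long)
```

With long runs, neighbouring pixels are highly correlated, so a 1000-pixel transect carries far fewer effective samples and the spread of the estimates widens accordingly.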

Relevance: 20.00%

Abstract:

The goal of this review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems, together with a configuration approach to some of them. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem, and then provide a general formulation in terms of particular configurations to study the range of arguments used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak-Potthast-Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented, and for each method we provide a historical survey of applications to different situations.

Relevance: 20.00%

Abstract:

We use the third perihelion pass of the Ulysses spacecraft to illustrate and investigate the “flux excess” effect, whereby open solar flux estimates from spacecraft increase with increasing heliocentric distance. We analyze the potential effects of small-scale structure in the heliospheric field (giving fluctuations in the radial component on timescales smaller than 1 h) and kinematic time-of-flight effects of longitudinal structure in the solar wind flow. We show that the flux excess is explained neither by very small-scale structure (timescales < 1 h) nor by the kinematic “bunching effect” on spacecraft sampling. The observed flux excess is, however, well explained by the kinematic effect of larger-scale (> 1 day) solar wind speed variations on the frozen-in heliospheric field. We show that averaging over an interval T that is long enough to eliminate structure originating in the heliosphere, yet short enough to avoid cancelling opposite-polarity radial field originating from genuine sector structure in the coronal source field, is only an approximately valid way of allowing for these effects, and does not adequately explain or account for differences between the streamer belt and the polar coronal holes.
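The basic mechanism, and the role of the averaging interval T, can be sketched with synthetic data. The field values, noise level and timescales below are illustrative assumptions, not Ulysses observations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic radial field (arbitrary units): a constant source polarity of
# +1 plus zero-mean fluctuations large enough to flip the observed sign.
n = 86400                              # one day of 1 s samples, illustrative
br = 1.0 + 2.5 * rng.standard_normal(n)

def mean_abs(b, t_samples):
    """<|Br|_T>: average Br over blocks of length T, then take the modulus.
    T = 1 reproduces the naive fine-scale estimate."""
    m = len(b) // t_samples
    blocks = b[: m * t_samples].reshape(m, t_samples).mean(axis=1)
    return float(np.abs(blocks).mean())

# Open solar flux estimates scale with <|Br|>; without pre-averaging, the
# sign flips caused by small-scale structure inflate the estimate.
excess = {t: mean_abs(br, t) for t in (1, 60, 3600)}
for t, v in excess.items():
    print(f"T = {t:5d} s  <|Br|_T> = {v:.3f}")
```

Pre-averaging suppresses the spurious excess here because the fluctuations are zero-mean; as the abstract notes, a real sector boundary inside the averaging window would be cancelled too, which is why the procedure is only approximately valid.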

Relevance: 20.00%

Abstract:

We investigate the “flux excess” effect, whereby open solar flux estimates from spacecraft increase with increasing heliocentric distance. We analyze the kinematic effect on these open solar flux estimates of large-scale longitudinal structure in the solar wind flow, with particular emphasis on correcting estimates made using data from near-Earth satellites. We show that scatter, but no net bias, is introduced by the kinematic “bunching effect” on sampling and that this is true for both compression and rarefaction regions. The observed flux excesses, as a function of heliocentric distance, are shown to be consistent with open solar flux estimates from solar magnetograms made using the potential field source surface method and are well explained by the kinematic effect of solar wind speed variations on the frozen-in heliospheric field. Applying this kinematic correction to the Omni-2 interplanetary data set shows that the open solar flux at solar minimum fell from an annual mean of 3.82 × 10^16 Wb in 1987 to close to half that value (1.98 × 10^16 Wb) in 2007, making the fall in the minimum value over the last two solar cycles considerably faster than the rise inferred from geomagnetic activity observations over four solar cycles in the first half of the 20th century.

Relevance: 20.00%

Abstract:

The spatial distribution of aerosol chemical composition and the evolution of the Organic Aerosol (OA) fraction are investigated based upon airborne measurements of aerosol chemical composition in the planetary boundary layer across Europe. Sub-micron aerosol chemical composition was measured using a compact Time-of-Flight Aerosol Mass Spectrometer (cToF-AMS). A range of sampling conditions were evaluated, including relatively clean background conditions, polluted conditions in North-Western Europe and the near-field to far-field outflow from such conditions. Ammonium nitrate and OA were found to be the dominant chemical components of the sub-micron aerosol burden, with mass fractions ranging from 20-50% each. Ammonium nitrate was found to dominate in North-Western Europe during episodes of high pollution, reflecting the enhanced NOx and ammonia sources in this region. OA was ubiquitous across Europe and concentrations generally exceeded sulphate by 30-160%. A factor analysis of the OA burden was performed in order to probe its evolution across this large range of spatial and temporal scales. Two separate Oxygenated Organic Aerosol (OOA) components were identified: one representing aged OOA, termed Low-Volatility OOA (LV-OOA), and another representing fresher OOA, termed Semi-Volatile OOA (SV-OOA), on the basis of their mass spectral similarity to previous studies. The factors derived from different flights were not chemically identical but rather reflect the range of OA composition sampled during a particular flight. Significant chemical processing of the OA was observed downwind of major sources in North-Western Europe, with the LV-OOA component becoming increasingly dominant as the distance from source and photochemical processing increased. The measurements suggest that the aging of OA can be viewed as a continuum, with a progression from a less oxidised, semi-volatile component to a highly oxidised, less-volatile component.
Substantial amounts of pollution were observed far downwind of continental Europe, with OA and ammonium nitrate being the major constituents of the sub-micron aerosol burden. Such anthropogenically perturbed air masses can significantly affect regional climate far downwind of major source regions.

Relevance: 20.00%

Abstract:

We propose a novel method for scoring the accuracy of protein binding site predictions: the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors of the community-wide prediction experiment CASP8. Whilst a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site obtains an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores between 0 and 1, based on the distances between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. BDT scores were found to correlate strongly with MCC scores whilst being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new, simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
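The MCC's blindness to where wrong predictions fall in 3D can be shown with a small worked example. The protein, residue numbers and predictions below are hypothetical, and the code implements only the standard MCC, not the BDT score:

```python
import math

def mcc(predicted, observed, all_residues):
    """Matthews Correlation Coefficient for a per-residue binary
    binding-site prediction (arguments are sets of residue numbers)."""
    tp = len(predicted & observed)
    fp = len(predicted - observed)
    fn = len(observed - predicted)
    tn = len(all_residues - predicted - observed)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

residues = set(range(1, 101))       # a hypothetical 100-residue protein
observed = {10, 11, 12, 45, 46}     # hypothetical observed binding site
near_miss = {13, 14, 15, 47, 48}    # wrong, but adjacent to the site
scattered = {3, 27, 60, 71, 90}     # wrong and far from the site

# The MCC cannot distinguish the two failure modes: both predictions get
# the same score, which is the limitation the BDT score is meant to fix.
print(mcc(near_miss, observed, residues))   # same value...
print(mcc(scattered, observed, residues))   # ...as this
```

A distance-aware score such as the BDT would reward `near_miss` over `scattered`, since its residues sit next to the true site.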

Relevance: 20.00%

Abstract:

Rationalizing non-participation as a resource deficiency in the household, this paper identifies strategies for milk-market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus ('distances to market') are computed from a model in which production and sales are correlated; sales are left-censored at some unobserved thresholds; production efficiencies are heterogeneous; and the data are in the form of a panel. Incorporating these features into the modelling exercise is important because they are fundamental to the data-generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time, and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggests that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional and production innovations alone are insufficient to encourage participation. Market-precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk.
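The censoring point, a positive unobserved threshold rather than zero, can be illustrated with a quick simulation. The distributions, units and threshold value are invented for illustration, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 10_000
surplus = rng.normal(2.0, 2.0, n)   # latent marketable surplus (litres/day)
threshold = 1.5                     # unobserved participation threshold

# A household sells only if its surplus repays the time cost of selling,
# so observed sales are left-censored at the threshold, not at zero.
sales = np.where(surplus > threshold, surplus, 0.0)
participating = sales > 0

# A tobit that censors at zero misplaces the participation margin: the
# smallest observed sale sits near the true threshold, well above zero.
print(f"participation rate:     {participating.mean():.2f}")
print(f"smallest positive sale: {sales[participating].min():.2f}")
```

The gap between zero and the smallest observed sale is exactly the feature a zero-censored model would misattribute, which motivates estimating the thresholds jointly with the rest of the model.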

Relevance: 20.00%

Abstract:

To improve the welfare of the rural poor and keep them in the countryside, the government of Botswana has been spending 40% of the value of agricultural GDP on agricultural support services. But can investment make smallholder agriculture prosperous in such adverse conditions? This paper derives an answer by applying a two-output, six-input stochastic translog distance function, with inefficiency effects and biased technical change, to panel data for the 18 districts and the commercial agricultural sector from 1979 to 1996. The model demonstrates that herds are the most important input, followed by draft power, land and seeds. Multilateral indices of technical change, technical efficiency and total factor productivity (TFP) show that the technology level of the commercial agricultural sector is more than six times that of traditional agriculture, and that the gap has been widening, owing to technological regression in traditional agriculture and modest progress in commercial agriculture. Since the levels of efficiency are similar, the same pattern is repeated by the TFP indices. This result highlights the policy dilemma of the trade-off between efficiency and equity objectives.