887 results for level set method
Abstract:
1. Jerdon's courser Rhinoptilus bitorquatus is a nocturnally active cursorial bird that is only known to occur in a small area of scrub jungle in Andhra Pradesh, India, and is listed as critically endangered by the IUCN. Information on its habitat requirements is needed urgently to underpin conservation measures. We quantified the habitat features that correlated with the use of different areas of scrub jungle by Jerdon's coursers, and developed a model to map potentially suitable habitat over large areas from satellite imagery and facilitate the design of surveys of Jerdon's courser distribution. 2. We used 11 arrays of 5-m long tracking strips consisting of smoothed fine soil to detect the footprints of Jerdon's coursers, and measured tracking rates (tracking events per strip night). We counted the number of bushes and trees, and described other attributes of vegetation and substrate in a 10-m square plot centred on each strip. We obtained reflectance data from Landsat 7 satellite imagery for the pixel within which each strip lay. 3. We used logistic regression models to describe the relationship between tracking rate by Jerdon's coursers and characteristics of the habitat around the strips, using ground-based survey data and satellite imagery. 4. Jerdon's coursers were most likely to occur where the density of large (>2 m tall) bushes was in the range 300-700 ha⁻¹ and where the density of smaller bushes was less than 1000 ha⁻¹. This habitat was detectable using satellite imagery. 5. Synthesis and applications. The occurrence of Jerdon's courser is strongly correlated with the density of bushes and trees, and is in turn affected by grazing with domestic livestock, woodcutting and mechanical clearance of bushes to create pasture, orchards and farmland. It is likely that there is an optimal level of grazing and woodcutting that would maintain or create suitable conditions for the species. Knowledge of the species' distribution is incomplete and there is considerable pressure from human use of apparently suitable habitats. Hence, distribution mapping is a high conservation priority. A two-step procedure is proposed, involving the use of ground surveys of bush density to calibrate satellite image-based mapping of potential habitat. These maps could then be used to select priority areas for Jerdon's courser surveys. The use of tracking strips to study habitat selection and distribution has potential in studies of other scarce and secretive species.
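The logistic-regression step described above can be illustrated with a minimal sketch (not the authors' code or data): detection of tracks on a strip-night is regressed on bush-density covariates, with a quadratic term for large-bush density so that occupancy can peak at intermediate densities. All variable names and the simulated data are assumptions for illustration.

```python
# Hedged sketch only: simulated strip-night detections vs. habitat covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
large_bushes = rng.uniform(0, 1500, n)   # large (>2 m) bushes per ha (assumed range)
small_bushes = rng.uniform(0, 2500, n)   # smaller bushes per ha (assumed range)

# Assumed data-generating pattern: detection peaks at intermediate large-bush
# density and declines with small-bush density.
eta = -4 + 0.012 * large_bushes - 1.2e-5 * large_bushes**2 - 0.001 * small_bushes
detected = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([large_bushes, large_bushes**2, small_bushes]))
fit = sm.GLM(detected, X, family=sm.families.Binomial()).fit()
print(fit.params)
```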
Abstract:
In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems.
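A minimal sketch of the graded-mesh idea mentioned above, assuming the impedance discontinuities sit at the two endpoints of [a, b]: element endpoints are clustered algebraically toward a and b, with the grading exponent q an assumed tuning parameter rather than a value from the paper.

```python
# Illustrative graded mesh on [a, b], refined toward both endpoints.
import numpy as np

def graded_mesh(a, b, n_half, q=3.0):
    """Mesh points on [a, b] clustered near a and b (algebraic grading)."""
    s = (np.arange(n_half + 1) / n_half) ** q   # graded parameters on [0, 1]
    mid = 0.5 * (a + b)
    left = a + (mid - a) * s                    # small elements next to a
    right = b - (b - mid) * s                   # small elements next to b
    return np.unique(np.concatenate([left, right]))

print(graded_mesh(0.0, 1.0, n_half=8))
```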
Abstract:
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the earlier proofs based on the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
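The final step, recovering sound-soft scatterer shapes as level curves of the reconstructed total field, can be sketched as follows; the grid, the stand-in field and the contour level 0.1 are assumptions for illustration, not values from the paper (a sound-soft boundary would correspond to a low level of |u|).

```python
# Hedged sketch: trace level curves of a reconstructed total field on a grid.
import numpy as np
from skimage import measure

x = np.linspace(-2.0, 2.0, 200)
X, Y = np.meshgrid(x, x)
u_total = np.abs(np.sin(np.pi * X) * np.sin(np.pi * Y))  # stand-in for |u_inc + u_scat|

contours = measure.find_contours(u_total, level=0.1)     # list of (row, col) curves
print(f"{len(contours)} level curves extracted")
```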
Abstract:
Accurate simulation of ice-sheet surface mass balance requires higher spatial resolution than is afforded by typical atmosphere-ocean general circulation models (AOGCMs), owing, in particular, to the need to resolve the narrow and steep margins where the majority of precipitation and ablation occurs. We have developed a method for calculating mass-balance changes by combining ice-sheet average time-series from AOGCM projections for future centuries, both with information from high-resolution climate models run for short periods and with a 20 km ice-sheet mass-balance model. Antarctica contributes negatively to sea level on account of increased accumulation, while Greenland contributes positively because ablation increases more rapidly. The uncertainty in the results is about 20% for Antarctica and 35% for Greenland. Changes in ice-sheet topography and dynamics are not included, but we discuss their possible effects. For an annual- and area-average warming exceeding 4.5 +/- 0.9 K in Greenland and 3.1 +/- 0.8 K in the global average, the net surface mass balance of the Greenland ice sheet becomes negative, in which case it is likely that the ice sheet would eventually be eliminated, raising global-average sea level by 7 m.
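A highly simplified sketch of the combination step described above, under the assumption that it can be caricatured as pattern scaling: a fixed high-resolution anomaly pattern (from a short high-resolution run) is scaled by the AOGCM ice-sheet-mean warming time series before being passed to a mass-balance calculation. The grid size, the pattern and the melt factor are all illustrative assumptions, not the published method.

```python
# Toy pattern-scaling sketch; not the published mass-balance scheme.
import numpy as np

years = np.arange(2000, 2101)
dT_mean = 0.04 * (years - 2000)                  # AOGCM ice-sheet-mean warming (K), assumed

ny, nx = 50, 30                                  # coarse stand-in for a 20 km grid
rng = np.random.default_rng(1)
pattern = rng.uniform(0.5, 1.5, (ny, nx))        # local-to-mean warming ratio (assumed)

dT_field = dT_mean[:, None, None] * pattern      # (year, y, x) downscaled warming
melt_anomaly = 0.5 * np.clip(dT_field, 0, None)  # toy melt sensitivity, m w.e. per K
print(melt_anomaly.shape)
```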
Abstract:
1. Habitat fragmentation can affect pollinator and plant population structure in terms of species composition, abundance, area covered and density of flowering plants. This, in turn, may affect pollinator visitation frequency, pollen deposition, seed set and plant fitness. 2. A reduction in the quantity of flower visits can be coupled with a reduction in the quality of pollination service and hence the plants’ overall reproductive success and long-term survival. Understanding the relationship between plant population size and/or isolation and pollination limitation is of fundamental importance for plant conservation. 3. We examined flower visitation and seed set of 10 different plant species from five European countries to investigate the general effects of plant population size and density, both within (patch level) and between populations (population level), on seed set and pollination limitation. 4. We found evidence that the effects of area and density of flowering plant assemblages were generally more pronounced at the patch level than at the population level. We also found that patch and population level together influenced flower visitation and seed set, and the latter increased with increasing patch area and density, but this effect was only apparent in small populations. 5. Synthesis. By using an extensive pan-European data set on flower visitation and seed set we have identified a general pattern in the interplay between the attractiveness of flowering plant patches for pollinators and density dependence of flower visitation, and also a strong plant species-specific response to habitat fragmentation effects. This can guide efforts to conserve plant–pollinator interactions, ecosystem functioning and plant fitness in fragmented habitats.
Abstract:
The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can either be performed using crossing positions of one-dimensional sections in order to test model performance in specific locations, or using the total two-dimensional data set of trajectories. The test yields four quantities: a binary decision of model skill, a confidence level which can be used as a measure of goodness-of-fit of the model, a test statistic which can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
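A minimal sketch of the crossing-position version of this test, with synthetic placeholder data in place of real buoy and model trajectories: observed and simulated crossing positions on one section are compared with SciPy's two-sample Kolmogorov–Smirnov test, and the p-value gives the binary skill decision at a chosen significance level.

```python
# Hedged sketch: KS-based skill test on one section's crossing positions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
obs_crossings = rng.normal(loc=25.0, scale=2.0, size=80)    # observed crossing latitudes (synthetic)
mod_crossings = rng.normal(loc=25.5, scale=2.5, size=200)   # model crossing latitudes (synthetic)

stat, p_value = stats.ks_2samp(obs_crossings, mod_crossings)
has_skill = p_value > 0.05                                   # binary decision at the 95% level
print(f"D = {stat:.3f}, p = {p_value:.3f}, skill: {has_skill}")
```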
Abstract:
Parameters to be determined in a least squares refinement calculation to fit a set of observed data may sometimes usefully be 'predicated' to values obtained from some independent source, such as a theoretical calculation. An algorithm for achieving this in a least squares refinement calculation is described, which leaves the operator in full control of the weight that he may wish to attach to the predicate values of the parameters.
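One common way to realise this idea, sketched below on made-up numbers and not necessarily the published algorithm, is to append the predicate values as extra weighted observations to the least squares system, so that the operator's chosen weight w_pred controls how strongly the refined parameters are pulled toward the predicates.

```python
# Hedged sketch: predicate values as weighted pseudo-observations in least squares.
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 5.0], [1.0, 7.0]])  # design matrix (assumed)
y = np.array([2.1, 2.9, 5.2, 6.8])                               # observed data (assumed)

x_pred = np.array([0.0, 1.0])   # predicate parameter values, e.g. from theory
w_pred = 10.0                   # operator-chosen weight on the predicates

A_aug = np.vstack([A, np.sqrt(w_pred) * np.eye(2)])
y_aug = np.concatenate([y, np.sqrt(w_pred) * x_pred])
params, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)  # minimises ||Ax-y||^2 + w||x-x_pred||^2
print(params)
```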
Abstract:
The measurement of the impact of technical change has received significant attention within the economics literature. One popular method of quantifying the impact of technical change is the use of growth accounting index numbers. However, in a recent article Nelson and Pack (1999) criticise the use of such index numbers in situations where technical change is likely to be biased in favour of one or other inputs. In particular, they criticise the common approach of applying observed cost shares, as proxies for partial output elasticities, to weight the change in quantities, which they claim is only valid under Hicks neutrality. Recent advances in the measurement of product and factor biases of technical change developed by Balcombe et al. (2000) provide a relatively straightforward means of correcting product and factor shares in the face of biased technical progress. This paper demonstrates the correction of both revenue and cost shares used in the construction of a TFP index for UK agriculture over the period 1953 to 2000, using both revenue and cost function share equations appended with stochastic latent variables to capture the bias effect. Technical progress is shown to be biased between both individual input and output groups. Output and input quantity aggregates are then constructed using both observed and corrected share weights and the resulting TFPs are compared. There does appear to be some significant bias in TFP if the effect of biased technical progress is not taken into account when constructing the weights.
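The share-weighted aggregation underlying such a TFP index can be sketched as a Törnqvist-style calculation: log TFP growth is share-weighted output growth minus share-weighted input growth. The shares and quantity changes below are made-up two-period numbers, and the bias correction of the shares discussed above is not implemented.

```python
# Hedged sketch: share-weighted (Tornqvist-style) TFP growth between two periods.
import numpy as np

dlog_outputs = np.array([0.03, 0.01])          # log quantity changes, e.g. crops, livestock (assumed)
dlog_inputs = np.array([0.00, 0.02, -0.01])    # e.g. labour, capital, materials (assumed)

rev_shares = np.array([0.6, 0.4])              # period-average revenue shares (assumed)
cost_shares = np.array([0.3, 0.5, 0.2])        # period-average cost shares (assumed)

dlog_tfp = rev_shares @ dlog_outputs - cost_shares @ dlog_inputs
print(f"TFP growth ~ {np.expm1(dlog_tfp):.3%}")
```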
Abstract:
Farming systems research is a multi-disciplinary holistic approach to solve the problems of small farms. Small and marginal farmers are the core of the Indian rural economy, constituting 0.80 of the total farming community but possessing only 0.36 of the total operational land. The declining trend of per capita land availability poses a serious challenge to the sustainability and profitability of farming. Under such conditions, it is appropriate to integrate land-based enterprises such as dairy, fishery, poultry, duckery, apiary, field and horticultural cropping within the farm, with the objective of generating adequate income and employment for these small and marginal farmers under a set of farm constraints and varying levels of resource availability and opportunity. The integration of different farm enterprises can be achieved with the help of a linear programming model. For the current review, integrated farming systems models were developed, by way of illustration, for the marginal, small, medium and large farms of eastern India using linear programming. Risk analyses were carried out for different levels of income and enterprise combinations. The fishery enterprise was shown to be less risk-prone whereas the crop enterprise involved greater risk. In general, the degree of risk increased with the increasing level of income. With increase in farm income and risk level, the resource use efficiency increased. Medium and large farms proved to be more profitable than small and marginal farms with a higher level of resource use efficiency and return per Indian rupee (Rs) invested. Among the different enterprises of integrated farming systems, a chain of interaction and resource flow was observed. In order to make farming profitable and improve resource use efficiency at the farm level, the synergy among interacting components of farming systems should be exploited. In the process of technology generation, transfer and other developmental efforts at the farm level (contrary to the discipline and commodity-based approaches which have a tendency to be piecemeal and in isolation), it is desirable to place a whole-farm scenario before the farmers to enhance their farm income, thereby motivating them towards more efficient and sustainable farming.
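A toy version of the kind of whole-farm linear programme described here, with entirely illustrative coefficients rather than figures from the review: enterprise areas are chosen to maximise gross margin subject to land and labour constraints.

```python
# Hedged sketch: tiny farm-plan LP (maximise gross margin under resource limits).
from scipy.optimize import linprog

# Enterprises: crop, dairy, fishery; decision variables are hectares allocated.
gross_margin = [400.0, 900.0, 650.0]     # return per ha (assumed)
land_use = [1.0, 1.0, 1.0]               # ha of land per ha of enterprise
labour_use = [20.0, 60.0, 35.0]          # person-days per ha (assumed)

res = linprog(
    c=[-g for g in gross_margin],        # linprog minimises, so negate the margins
    A_ub=[land_use, labour_use],
    b_ub=[2.0, 90.0],                    # 2 ha of land, 90 person-days available (assumed)
    bounds=[(0, None)] * 3,
    method="highs",
)
print(res.x, -res.fun)                   # optimal areas and gross margin
```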
Abstract:
A series of experiments was completed to investigate the impact of addition of enzymes at ensiling on in vitro rumen degradation of maize silage. Two commercial products, Depol 40 (D, Biocatalysts Ltd., Pontypridd, UK) and Liquicell 2500 (L, Specialty Enzymes and Biochemicals, Fresno, CA, USA), were used. In experiment 1, the pH optima over a pH range 4.0-6.8 and the stability of D and L under changing pH (4.0, 5.6, 6.8) and temperature (15 and 39 °C) conditions were determined. In experiment 2, D and L were applied at three levels to whole crop maize at ensiling, using triplicate 0.5 kg capacity laboratory minisilos. A completely randomized design with a factorial arrangement of treatments was used. One set of treatments was stored at room temperature, whereas another set was stored at 40 °C during the first 3 weeks of fermentation, and then stored at room temperature. Silages were opened after 120 days. Results from experiment 1 indicated that the xylanase activity of both products showed an optimal pH of about 5.6, but the response differed according to the enzyme, whereas the endoglucanase activity was inversely related to pH. Both products retained at least 70% of their xylanase activity after 48 h incubation at 15 or 39 °C. In experiment 2, enzymes reduced (P < 0.05) silage pH, regardless of storage temperature and enzyme level. Depol 40 reduced (P < 0.05) the starch contents of the silages, due to its high alpha-amylase activity. This effect was more noticeable in the silages stored at room temperature. Addition of L reduced (P < 0.05) neutral detergent fiber (NDF) and acid detergent fiber (ADF) contents. In vitro rumen degradation, assessed using the Reading Pressure Technique (RPT), showed that L increased (P < 0.05) the initial 6 h gas production (GP) and organic matter degradability (OMD), but did not affect (P > 0.05) the final extent of OMD, indicating that this preparation acted on the rumen degradable material. In contrast, silages treated with D had reduced (P < 0.05) rates of gas production and OMD. These enzymes, regardless of ensiling temperature, can be effective in improving the nutritive quality of maize silage when applied at ensiling. However, the biochemical properties of enzymes (i.e., enzymic activities, optimum pH) may have a crucial role in dictating the nature of the responses. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
This review considers microbial inocula used in in vitro systems from the perspective of their ability to degrade or ferment a particular substrate, rather than the microbial species that they contain. By necessity, this required an examination of bacterial, protozoal and fungal populations of the rumen and hindgut with respect to factors influencing their activity. The potential to manipulate these populations through diet or sampling time is examined, as are inoculum preparation and level. The main alternatives to fresh rumen fluid (i.e., caecal digesta or faeces) are discussed with respect to end-point degradabilities and fermentation dynamics. Although the potential to use rumen contents obtained from donor animals at slaughter offers possibilities, the requirement to store it and its subsequent loss of activity are limitations. Statistical modelling of data, although still requiring a good deal of developmental work, may offer an alternative approach. Finally, with respect to the range of in vitro methodologies and equipment employed, it is suggested that a degree of uniformity could be obtained through generation of a set of guidelines relating to the host animal, sampling technique and inoculum preparation. It was considered unlikely that any particular system would be accepted as the 'standard' procedure. However, before any protocol can be adopted, additional data are required (e.g., a method to assess inoculum 'quality' with respect to its fermentative and/or degradative activity), preparation/inoculation techniques need to be refined and a methodology to store inocula without loss of efficacy developed. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
An aggregated farm-level index, the Agri-environmental Footprint Index (AFI), based on multiple criteria methods and representing a harmonised approach to evaluation of EU agri-environmental schemes is described. The index uses a common framework for the design and evaluation of policy that can be customised to locally relevant agri-environmental issues and circumstances. Evaluation can be strictly policy-focused, or broader and more holistic in that context-relevant assessment criteria that are not necessarily considered in the evaluated policy can nevertheless be incorporated. The Index structure is flexible, and can respond to diverse local needs. The process of Index construction is interactive, engaging farmers and other relevant stakeholders in a transparent decision-making process that can ensure acceptance of the outcome, help to forge an improved understanding of local agri-environmental priorities and potentially increase awareness of the critical role of farmers in environmental management. The structure of the AFI facilitates post-evaluation analysis of relative performance in different dimensions of the agri-environment, permitting identification of current strengths and weaknesses, and enabling future improvement in policy design. Quantification of the environmental impact of agriculture beyond the stated aims of policy using an 'unweighted' form of the AFI has potential as the basis of an ongoing system of environmental audit within a specified agricultural context. (C) 2009 Elsevier Ltd. All rights reserved.
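At its simplest, an aggregated index of this kind reduces to a weighted combination of farm scores on locally chosen criteria; the sketch below is purely illustrative, with hypothetical criteria, weights and scores rather than anything from the AFI specification.

```python
# Hedged sketch: weighted aggregation of farm-level scores across criteria.
import numpy as np

criteria = ["soil management", "water quality", "biodiversity", "landscape"]  # hypothetical
weights = np.array([0.30, 0.30, 0.25, 0.15])   # stakeholder-derived weights, sum to 1 (assumed)
scores = np.array([0.8, 0.6, 0.4, 0.7])        # farm performance on [0, 1] (assumed)

index_value = float(weights @ scores)
print(f"aggregate index = {index_value:.2f}")
```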
Abstract:
Nonregular two-level fractional factorial designs are designs which cannot be specified in terms of a set of defining contrasts. The aliasing properties of nonregular designs can be compared by using a generalisation of the minimum aberration criterion called minimum G2-aberration. Until now, the only nontrivial designs that are known to have minimum G2-aberration are designs for n runs and m ≥ n − 5 factors. In this paper, a number of construction results are presented which allow minimum G2-aberration designs to be found for many of the cases with n = 16, 24, 32, 48, 64 and 96 runs and m ≥ n/2 − 2 factors.
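For readers unfamiliar with the criterion, the quantity being minimised can be sketched via J-characteristics: for a ±1-coded design, B_k sums the squared, normalised J-characteristics over column subsets of size k, and minimum G2-aberration minimises B_1, B_2, ... sequentially. The code below is an illustrative calculation on an arbitrary small regular design, not a construction from the paper.

```python
# Hedged sketch: generalised word-length pattern (B_1..B_m) of a two-level design.
import itertools
import numpy as np

def gwlp(D):
    """D: n-by-m array with +/-1 entries. Returns [B_1, ..., B_m]."""
    n, m = D.shape
    B = []
    for k in range(1, m + 1):
        total = 0.0
        for S in itertools.combinations(range(m), k):
            J = abs(D[:, list(S)].prod(axis=1).sum())   # J-characteristic of subset S
            total += (J / n) ** 2
        B.append(total)
    return B

# 8-run, 4-factor regular design with defining relation I = ABCD
base = np.array(list(itertools.product([-1, 1], repeat=3)))
D = np.column_stack([base, base.prod(axis=1)])
print(gwlp(D))   # expect [0, 0, 0, 1]: the single defining word ABCD of length 4
```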
Abstract:
To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467-475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors that allow the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main effects designs with robustness to interactions as a secondary consideration. We show that this criterion, exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
It is generally acknowledged that population-level assessments provide a better measure of response to toxicants than assessments of individual-level effects. Population-level assessments generally require the use of models to integrate potentially complex data about the effects of toxicants on life-history traits, and to provide a relevant measure of ecological impact. Building on excellent earlier reviews, we here briefly outline the modelling options in population-level risk assessment. Modelling is used to calculate population endpoints from available data, which is often about individual life histories, the ways that individuals interact with each other, the environment and other species, and the ways individuals are affected by pesticides. As population endpoints, we recommend the use of population abundance, population growth rate, and the chance of population persistence. We recommend two types of model: simple life-history models distinguishing two life-history stages, juveniles and adults; and spatially-explicit individual-based landscape models. Life-history models are very quick to set up and run, and they provide a great deal of insight. At the other extreme, individual-based landscape models provide the greatest verisimilitude, albeit at the cost of greatly increased complexity. We conclude with a discussion of the implications of the severe problems of parameterising models.
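The simpler of the two recommended model types can be sketched in a few lines: a two-stage (juvenile/adult) projection matrix whose dominant eigenvalue is the population growth rate. The vital rates and the assumed toxicant effect (a proportional cut in fecundity) are illustrative only.

```python
# Hedged sketch: two-stage life-history model; lambda = dominant eigenvalue.
import numpy as np

def growth_rate(s_j, s_a, fecundity):
    """Dominant eigenvalue of a (juvenile, adult) projection matrix."""
    A = np.array([[0.0, fecundity],   # offspring produced by adults
                  [s_j, s_a]])        # juvenile maturation and adult survival
    return max(abs(np.linalg.eigvals(A)))

control = growth_rate(s_j=0.3, s_a=0.7, fecundity=2.0)
exposed = growth_rate(s_j=0.3, s_a=0.7, fecundity=2.0 * 0.6)   # assumed 40% fecundity loss
print(f"lambda: control {control:.2f}, exposed {exposed:.2f}")
```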