886 results for index method
Abstract:
Productivity growth is conventionally measured by indices representing discrete approximations of the Divisia TFP index under the assumption that technological change is Hicks-neutral. When this assumption is violated, these indices are no longer meaningful because they conflate the effects of factor accumulation and technological change. We propose a way of adjusting the conventional TFP index that solves this problem. The method adopts a latent variable approach to the measurement of technical change biases that provides a simple means of correcting product and factor shares in the standard Tornqvist-Theil TFP index. An application to UK agriculture over the period 1953-2000 demonstrates that technical progress is strongly biased. The implications of that bias for productivity measurement are shown to be very large, with the conventional TFP index severely underestimating productivity growth. The result is explained primarily by the fact that technological change has favoured the rapidly accumulating factors against labour, the factor leaving the sector. (C) 2004 Elsevier B.V. All rights reserved.
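The standard Tornqvist-Theil index that the abstract starts from can be sketched as follows: log TFP growth between two periods is share-weighted output growth minus share-weighted input growth, with revenue and cost shares averaged across the periods. This is a minimal sketch of the conventional index only, not the authors' bias-adjusted version; the example numbers are made up.

```python
import math

def tornqvist_tfp_growth(y0, y1, rev0, rev1, x0, x1, cost0, cost1):
    """Log TFP growth between two periods as a Tornqvist-Theil index:
    share-weighted output growth minus share-weighted input growth,
    with revenue/cost shares averaged across the two periods."""
    out = sum(0.5 * (r0 + r1) * math.log(b / a)
              for a, b, r0, r1 in zip(y0, y1, rev0, rev1))
    inp = sum(0.5 * (s0 + s1) * math.log(b / a)
              for a, b, s0, s1 in zip(x0, x1, cost0, cost1))
    return out - inp

# One output growing 10%; two inputs, one growing 10% and one constant.
g = tornqvist_tfp_growth([100], [110], [1.0], [1.0],
                         [50, 20], [55, 20], [0.6, 0.4], [0.6, 0.4])
```

With these numbers, only the constant input's cost share (0.4) of the 10% output growth remains as measured TFP growth.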
Abstract:
In this article, we investigate the commonly used autoregressive filter method of adjusting appraisal-based real estate returns to correct for the perceived biases induced in the appraisal process. Many articles have been written on appraisal smoothing but remarkably few have considered the relationship between smoothing at the individual property level and the amount of persistence in the aggregate appraisal-based index. To investigate this issue we analyze a large sample of appraisal data at the individual property level from the Investment Property Databank. We find that commonly used unsmoothing estimates at the index level overstate the extent of smoothing that takes place at the individual property level. There is also strong support for an ARFIMA representation of appraisal returns at the index level and an ARMA model at the individual property level.
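The first-order "unsmoothing" filter commonly applied to appraisal-based returns can be sketched as follows: if appraisals smooth true returns via r_t = a*r_{t-1} + (1-a)*r*_t, the filter inverts that relation to recover r*_t. This is a minimal sketch of the generic AR(1) filter the article investigates, under an assumed smoothing parameter a.

```python
def unsmooth(r, a):
    """Reverse a first-order appraisal-smoothing filter
    r_t = a*r_{t-1} + (1-a)*r*_t, recovering the unsmoothed r*_t for t >= 1."""
    if not 0.0 <= a < 1.0:
        raise ValueError("smoothing parameter a must lie in [0, 1)")
    return [(r[t] - a * r[t - 1]) / (1.0 - a) for t in range(1, len(r))]

# With a = 0.5, the smoothing filter turns a true return of 0.04 (after an
# initial 0.02) into an appraisal return of 0.03; unsmoothing recovers 0.04.
recovered = unsmooth([0.02, 0.03], 0.5)
```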
Abstract:
Unless the benefits to society of measures to protect and improve the welfare of animals are made transparent by means of their valuation they are likely to go unrecognised and cannot easily be weighed against the costs of such measures as required, for example, by policy-makers. A simple single measure scoring system, based on the Welfare Quality® index, is used, together with a choice experiment economic valuation method, to estimate the value that people place on improvements to the welfare of different farm animal species measured on a continuous (0-100) scale. Results from using the method on a survey sample of some 300 people show that it is able to elicit apparently credible values. The survey found that 96% of respondents thought that we have a moral obligation to safeguard the welfare of animals and that over 72% were concerned about the way farm animals are treated. Estimated mean annual willingness to pay for meat from animals with improved welfare of just one point on the scale was £5.24 for beef cattle, £4.57 for pigs and £5.10 for meat chickens. Further development of the method is required to capture the total economic value of animal welfare benefits. Despite this, the method is considered a practical means for obtaining economic values that can be used in the cost-benefit appraisal of policy measures intended to improve the welfare of animals.
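In choice-experiment valuation of the kind described, marginal willingness to pay for an attribute is typically derived as the ratio of the attribute coefficient to the (negative) price coefficient of a linear choice model. A minimal sketch with hypothetical coefficients (the values below are illustrative, not the study's estimates):

```python
def marginal_wtp(beta_attr, beta_price):
    """Marginal willingness to pay implied by a linear-in-attributes choice
    model: minus the ratio of the attribute coefficient to the price
    coefficient (beta_price is expected to be negative)."""
    return -beta_attr / beta_price

# Hypothetical welfare-score and price coefficients, for illustration only.
wtp = marginal_wtp(0.30, -0.06)
```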
Abstract:
We present an approach for dealing with coarse-resolution Earth observations (EO) in terrestrial ecosystem data assimilation schemes. The use of coarse-scale observations in ecological data assimilation schemes is complicated by spatial heterogeneity and nonlinear processes in natural ecosystems. If these complications are not appropriately dealt with, then the data assimilation will produce biased results. The “disaggregation” approach that we describe in this paper combines frequent coarse-resolution observations with temporally sparse fine-resolution measurements. We demonstrate the approach using a demonstration data set based on measurements of an Arctic ecosystem. In this example, normalized difference vegetation index observations are assimilated into a “zero-order” model of leaf area index and carbon uptake. The disaggregation approach conserves key ecosystem characteristics regardless of the observation resolution and estimates the carbon uptake to within 1% of the demonstration data set “truth.” Assimilating the same data in the normal manner, but without the disaggregation approach, results in carbon uptake being underestimated by 58% at an observation resolution of 250 m. The disaggregation method allows the combination of multiresolution EO and improves in spatial resolution if observations are located on a grid that shifts from one observation time to the next. Additionally, the approach is not tied to a particular data assimilation scheme, model, or EO product and can cope with complex observation distributions, as it makes no implicit assumptions of normality.
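The bias that motivates the disaggregation approach can be illustrated in a few lines: when a nonlinear model is applied to a spatially averaged (coarse) observation over a heterogeneous area, the result differs from the average of the model applied at fine resolution (Jensen's inequality). The NDVI-to-LAI transform below is an illustrative stand-in, not the paper's "zero-order" model.

```python
import math

# Four heterogeneous fine-resolution NDVI pixels inside one coarse pixel.
ndvi_fine = [0.2, 0.8, 0.3, 0.9]

def lai(ndvi, k=0.5):
    """Illustrative nonlinear NDVI -> LAI transform (Beer-Lambert style)."""
    return -math.log(1.0 - ndvi) / k

mean_of_model = sum(lai(v) for v in ndvi_fine) / len(ndvi_fine)  # fine-scale average
model_of_mean = lai(sum(ndvi_fine) / len(ndvi_fine))             # coarse observation
```

Because the transform is convex, the coarse-pixel estimate falls below the fine-scale average, mirroring the underestimation the abstract reports for naive assimilation of coarse data.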
Abstract:
This letter presents an effective approach for selection of appropriate terrain modeling methods in forming a digital elevation model (DEM). This approach achieves a balance between modeling accuracy and modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces into either complex or moderate based on this index associated with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used in constructing an experimental data set. The results have shown that the terrain complexity index properly reflects the terrain complexity, and the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than that designed from either the terrain complexity index or the terrain elevation range only. The statistical results have shown that the average classification accuracy of SVMs is about 84.3% ± 0.9% for terrain types (complex or moderate). For various ratios of complex and moderate terrain types in a selected data set, the DEM modeling speed increases up to 19.5% with given DEM accuracy.
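The letter does not reproduce its terrain complexity index here, but the general idea of summarising a terrain's roughness in one number can be sketched with a hypothetical stand-in: the dispersion of cell-to-cell elevation differences along a profile, which is zero for flat or uniformly sloping terrain and grows with ruggedness.

```python
import statistics

def complexity_index(elevations):
    """Hypothetical stand-in for the letter's terrain complexity index
    (the actual definition is not reproduced here): standard deviation of
    cell-to-cell elevation differences along a profile."""
    diffs = [b - a for a, b in zip(elevations, elevations[1:])]
    return statistics.pstdev(diffs)
```

A smooth ramp scores zero, like flat ground, while a sawtooth profile scores high; a classifier such as the SVM in the letter would combine an index like this with the elevation range.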
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_{i,j}: the probability for a visit made by the i-th pollinator species to take place on the j-th plant species; (2) q_{i,j}: the probability for a visit received by the j-th plant species to be made by the i-th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities, based on a given empirical data set. The estimated mean values for p_{i,j} and q_{i,j} reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_{i,j} and q_{i,j} decreases with higher numbers of recorded visits.
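The Dirichlet estimation step can be sketched for one row of the visitation matrix: with a symmetric Dirichlet(alpha) prior, the posterior mean of each cell probability is its (prior-augmented) count over the row total, and the posterior standard deviation shrinks as more visits are recorded, matching the abstract's closing point. The choice alpha = 1 below is an assumption for illustration.

```python
import math

def dirichlet_row(counts, alpha=1.0):
    """Posterior mean and sd of each cell probability in one row of the
    visitation matrix, under a symmetric Dirichlet(alpha) prior."""
    a = [c + alpha for c in counts]
    a0 = sum(a)
    means = [ai / a0 for ai in a]
    sds = [math.sqrt(ai * (a0 - ai) / (a0 ** 2 * (a0 + 1.0))) for ai in a]
    return means, sds
```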
Abstract:
Background: The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. New method: We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single-trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). Results: After validating the pipeline on simulated data, we tested it on data from two experiments – a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership.
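The k-means step at the core of the pipeline can be sketched in one dimension. This is a minimal version with plain random seeding; the paper instead seeds k-means with GA-derived centroids after EMD denoising, which this sketch does not attempt.

```python
import random

def kmeans_1d(data, k, iters=50, seed=0):
    """Minimal 1-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda j: abs(x - centroids[j]))
            clusters[nearest].append(x)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)
```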
Abstract:
This study has compared preliminary estimates of effective leaf area index (LAI) derived from fish-eye lens photographs to those estimated from airborne full-waveform small-footprint LiDAR data for a forest dataset in Australia. The full-waveform data was decomposed and optimized using a trust-region-reflective algorithm to extract denser point clouds. LAI LiDAR estimates were derived in two ways: (1) from the probability of discrete pulses reaching the ground without being intercepted (point method) and (2) from raw waveform canopy height profile processing adapted to small-footprint laser altimetry (waveform method), accounting for the reflectance ratio between vegetation and ground. The best results, which matched hemispherical photography estimates, were achieved for the waveform method with a study area-adjusted reflectance ratio of 0.4 (RMSE of 0.15 and 0.03 at plot and site level, respectively). The point method generally overestimated, whereas the waveform method with an arbitrary reflectance ratio of 0.5 underestimated the fish-eye lens LAI estimates.
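The point method can be sketched as a Beer-Lambert inversion of the gap probability: the fraction of pulses reaching the ground gives P_gap, and LAI = -ln(P_gap)/k. The extinction coefficient k = 0.5 below is an assumption (a spherical leaf angle distribution), not a value from the study.

```python
import math

def lai_point_method(ground_returns, total_returns, k=0.5):
    """Effective LAI from discrete LiDAR returns: the gap probability is the
    fraction of pulses reaching the ground, inverted via LAI = -ln(P_gap)/k.
    k = 0.5 assumes a spherical leaf angle distribution."""
    p_gap = ground_returns / total_returns
    if p_gap <= 0.0:
        raise ValueError("no ground returns: gap fraction is zero")
    return -math.log(p_gap) / k
```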
Abstract:
In numerical weather prediction, parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model as parameter values used in these parameterisations cannot be measured directly and hence are often not well known; and the parameterisations themselves are also approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of missing physics or parameterisations that are based upon prior information. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. Furthermore, it is shown how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool in systematic model improvement.
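The fitting step, estimated model errors regressed onto pre-determined functional forms, can be sketched as an ordinary least-squares fit to two candidate basis functions via the 2x2 normal equations. The sine/cosine basis below is an assumption for illustration, not the paper's parameterisation.

```python
import math

def fit_two_basis(xs, errors, f1, f2):
    """Least-squares fit of DA-estimated model errors to two candidate
    functional forms, errors ~ a*f1(x) + b*f2(x), via 2x2 normal equations."""
    s11 = sum(f1(x) ** 2 for x in xs)
    s12 = sum(f1(x) * f2(x) for x in xs)
    s22 = sum(f2(x) ** 2 for x in xs)
    b1 = sum(f1(x) * e for x, e in zip(xs, errors))
    b2 = sum(f2(x) * e for x, e in zip(xs, errors))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Recover known coefficients a = 2, b = -1 from noise-free errors.
xs = [0.1 * i for i in range(20)]
errs = [2.0 * math.sin(x) - 1.0 * math.cos(x) for x in xs]
a, b = fit_two_basis(xs, errs, math.sin, math.cos)
```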
Abstract:
1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. 
Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
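The linear interpolation baseline compared above can be sketched directly (the regional GAM alternative needs a statistical library and is not attempted here): missing weekly counts are filled linearly between adjacent observed weeks, and the annual index is the sum over weeks.

```python
def annual_index(weekly_counts):
    """Annual abundance index by linear interpolation: missing weekly counts
    (None) are filled linearly between adjacent observed weeks, then summed.
    Assumes the first and last weeks are observed (counts fall to zero
    outside the flight period in practice)."""
    obs = [(i, c) for i, c in enumerate(weekly_counts) if c is not None]
    filled = list(weekly_counts)
    for (i0, c0), (i1, c1) in zip(obs, obs[1:]):
        for i in range(i0 + 1, i1):
            filled[i] = c0 + (c1 - c0) * (i - i0) / (i1 - i0)
    return sum(filled)
```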
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa. There are two data-related hurdles to this. First, most farmers do not live close enough to a rain gauge with a sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts – a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
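The WII payout mechanics and the temporal aggregation discussed above can be sketched minimally: a linear contract pays pro rata as the rainfall index falls below a strike, and aggregation sums daily rainfall over fixed windows. The linear payout form and the strike/cap values are assumptions for illustration, not the study's contract design.

```python
def wii_payout(index_value, strike, max_payout):
    """Linear weather-index payout: pays pro rata as the rainfall index
    falls below the strike, reaching max_payout when the index hits zero."""
    shortfall = max(0.0, strike - index_value)
    return max_payout * min(1.0, shortfall / strike)

def aggregate(daily_rain, window):
    """Temporal aggregation: non-overlapping rainfall sums over `window` days."""
    return [sum(daily_rain[i:i + window])
            for i in range(0, len(daily_rain), window)]
```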
Abstract:
Objective To assess dietary quality and associated factors in adolescents. Study design We conducted a population-based cross-sectional study in a sample of 1584 adolescents living in areas of the state of Sao Paulo, Brazil. Dietary intake was measured with the 24-hour recall method, and dietary quality was assessed by means of the Healthy Eating Index (HEI), adapted to local requirements. Linear regression analyses were performed to assess the association between the HEI and demographic, socioeconomic, and lifestyle variables. Results A total of 97.1% of the adolescents studied had an inadequate diet or a diet that needed improvement. The mean overall HEI score was 59.7. Lower mean HEI scores were found for fruits, dairy products, and vegetables. Male adolescents who were physically active and lived in a house or apartment had higher HEI scores. The multiple regression analyses showed that the quality of the diet improved as age decreased. Adolescents who lived in houses or apartments had higher HEI scores than adolescents living in shacks or slums, regardless of age and energy intake. Conclusions Dietary quality is associated with income and age. A better understanding of the associated factors can provide input to the formulation of policies and development of nutritional actions. (J Pediatr 2010; 156:456-60).
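The general HEI scoring idea can be sketched per component: intake is scored linearly from 0 at zero intake to the maximum score at the recommended level, and capped above it. This is a generic sketch of that scoring scheme, not the adapted Brazilian HEI used in the study.

```python
def component_score(intake, recommended, max_score=10.0):
    """Illustrative HEI-style component score: linear from 0 at zero intake
    to max_score at the recommended intake, capped above it."""
    return max_score * min(1.0, intake / recommended)
```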
An improved estimate of leaf area index based on the histogram analysis of hemispherical photographs
Abstract:
Leaf area index (LAI) is a key parameter that affects the surface fluxes of energy, mass, and momentum over vegetated lands, but observational measurements are scarce, especially in remote areas with complex canopy structure. In this paper we present an indirect method to calculate the LAI based on the analyses of histograms of hemispherical photographs. The optimal threshold value (OTV), the gray-level required to separate the background (sky) and the foreground (leaves), was analytically calculated using the entropy crossover method (Sahoo, P.K., Slaaf, D.W., Albert, T.A., 1997. Threshold selection using a minimal histogram entropy difference. Optical Engineering 36(7) 1976-1981). The OTV was used to calculate the LAI using the well-known gap fraction method. This methodology was tested in two different ecosystems, including Amazon forest and pasturelands in Brazil. In general, the error between observed and calculated LAI was approximately 6%. The methodology presented is suitable for the calculation of LAI since it is responsive to sky conditions, automatic, easy to implement, faster than commercially available software, and requires less data storage. (C) 2008 Elsevier B.V. All rights reserved.
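Entropy-based histogram thresholding of the kind described can be sketched with the maximum-entropy (Kapur-style) criterion: choose the grey level that maximises the summed entropies of the background and foreground distributions. Note this is a related entropy criterion, not the exact minimal-entropy-difference rule of Sahoo et al. (1997) used in the paper.

```python
import math

def entropy_threshold(hist):
    """Maximum-entropy (Kapur-style) grey-level threshold: pick the level t
    that maximises the sum of the entropies of the background (bins < t)
    and foreground (bins >= t) probability distributions."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    best_t, best_h = 1, float("-inf")
    for t in range(1, len(hist)):
        pb = sum(p[:t])        # background mass
        pf = 1.0 - pb          # foreground mass
        if pb <= 0.0 or pf <= 0.0:
            continue
        hb = -sum(q / pb * math.log(q / pb) for q in p[:t] if q > 0)
        hf = -sum(q / pf * math.log(q / pf) for q in p[t:] if q > 0)
        if hb + hf > best_h:
            best_t, best_h = t, hb + hf
    return best_t
```

On a bimodal sky/leaf histogram, the selected threshold falls in the valley between the two modes; the resulting gap fraction then feeds the LAI calculation.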
Abstract:
Modular product architectures have generated numerous benefits for companies in terms of cost, lead-time and quality. The defined interfaces and the module’s properties decrease the effort to develop new product variants, and provide an opportunity to perform parallel tasks in design, manufacturing and assembly. The background of this thesis is that companies perform verifications (tests, inspections and controls) of products late, when most of the parts have been assembled. This extends the lead-time to delivery and ruins benefits from a modular product architecture; specifically when the verifications are extensive and the frequency of detected defects is high. Due to the number of product variants obtained from the modular product architecture, verifications must handle a wide range of equipment, instructions and goal values to ensure that high quality products can be delivered. As a result, the total benefits from a modular product architecture are difficult to achieve. This thesis describes a method for planning and performing verifications within a modular product architecture. The method supports companies by utilizing the defined modules for verifications already at module level, so-called MPV (Module Property Verification). With MPV, defects are detected at an earlier point, compared to verification of a complete product, and the number of verifications is decreased. The MPV method is built up of three phases. In Phase A, candidate modules are evaluated on the basis of costs and lead-time of the verifications and the repair of defects. An MPV-index is obtained which quantifies the module and indicates if the module should be verified at product level or by MPV. In Phase B, the interface interaction between the modules is evaluated, as well as the distribution of properties among the modules. The purpose is to evaluate the extent to which supplementary verifications at product level are needed. Phase C supports a selection of the final verification strategy. 
The cost and lead-time for the supplementary verifications are considered together with the results from Phase A and B. The MPV method is based on a set of qualitative and quantitative measures and tools which provide an overview and support the achievement of cost and time efficient company specific verifications. A practical application in industry shows how the MPV method can be used, and the subsequent benefits.
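The thesis does not reproduce the MPV-index formula here, but the Phase A decision can be illustrated with a hypothetical index: weighted verification cost and lead-time at product level relative to module level, where a high ratio favours verifying the module early. Every name and weight below is a made-up stand-in, not the thesis's definition.

```python
def mpv_index(cost_module, lead_module, cost_product, lead_product,
              w_cost=1.0, w_time=1.0):
    """Hypothetical illustration of an MPV-style index: weighted verification
    cost and lead-time at product level relative to module level. Values
    above 1 suggest verifying at module level (MPV) pays off."""
    at_module = w_cost * cost_module + w_time * lead_module
    at_product = w_cost * cost_product + w_time * lead_product
    return at_product / at_module
```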