917 results for Sampling schemes
Abstract:
Imputation is commonly used to compensate for item non-response in sample surveys. If we treat the imputed values as if they were true values and then compute variance estimates by standard methods, such as the jackknife, we can seriously underestimate the true variances. We propose a modified jackknife variance estimator which is defined for any without-replacement unequal probability sampling design in the presence of imputation and a non-negligible sampling fraction. Mean, ratio and random imputation methods are considered. The practical advantage of the proposed method is its breadth of applicability.
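To make the pitfall concrete, here is a minimal sketch (with illustrative data, not from the paper) of the standard delete-one jackknife variance estimator for a sample mean; applying it to data in which non-respondents have been mean-imputed is exactly the naive use the abstract warns can seriously understate the true variance.

```python
import numpy as np

def jackknife_variance(y):
    """Standard delete-one jackknife variance estimate of the sample mean.

    Treats every value in `y` as observed; if some entries are imputed,
    this naive estimator can seriously understate the true variance.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Leave-one-out means: theta_(i) = mean of y with observation i removed
    loo_means = (y.sum() - y) / (n - 1)
    theta_bar = loo_means.mean()
    return (n - 1) / n * np.sum((loo_means - theta_bar) ** 2)

# Illustration: complete data vs. data with mean-imputed non-respondents
rng = np.random.default_rng(0)
full = rng.normal(10.0, 2.0, size=50)
observed = full[:40]
imputed = np.concatenate([observed, np.full(10, observed.mean())])  # mean imputation
print(jackknife_variance(full), jackknife_variance(imputed))  # naive estimate shrinks
```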
Abstract:
Phylogenetic methods hold great promise for reconstructing the transition from precursor to modern flora and for identifying the underlying factors that drive the process. The phylogenetic methods presently used to address the question of the origin of the Cape flora of South Africa are considered here. The sampling requirements of each of these methods, which include dating of diversifications using calibrated molecular trees, sister-pair comparisons, lineage-through-time plots and biogeographical optimizations, are reviewed. Sampling of genes, genomes and species is considered. Although more higher-level studies and increased sampling are required for robust interpretation, it is clear that much progress has already been made. It is argued that, despite its remarkable richness, the Cape flora is a valuable model system for demonstrating the utility of phylogenetic methods in determining the history of a modern flora.
Abstract:
A means of assessing, monitoring and controlling aggregate emissions from multi-instrument Emissions Trading Schemes is proposed. The approach allows contributions from different instruments with different forms of emissions targets to be integrated. Where Emissions Trading Schemes are helping to meet specific national targets, the approach allows the entry requirements of new participants to be calculated and set at a level that will achieve these targets. The approach is multi-levelled, and may be extended downwards to support pooling of participants within instruments, or upwards to embed Emissions Trading Schemes within a wider suite of policies and measures with hard and soft targets. Aggregate emissions from each instrument are treated stochastically. Emissions from the scheme as a whole are then the joint probability distribution formed by integrating the emissions from its instruments. Because a Bayesian approach is adopted, qualitative and semi-qualitative data from expert opinion can be used where quantitative data are not currently available, or are incomplete. This approach helps government retain sufficient control over emissions trading scheme targets to allow it to meet its emissions reduction obligations, while minimising the need to retrospectively adjust existing participants’ conditions of entry. This maintains participant confidence, while providing the necessary policy levers for good governance.
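A minimal Monte Carlo sketch of the aggregation idea described above: each instrument's aggregate emissions are treated stochastically, the scheme-wide distribution is formed by combining draws across instruments, and the probability of exceeding a national target can be read off. The distributions, units and target figure are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Illustrative instrument-level emission distributions (kt CO2e/yr);
# in practice these could be Bayesian posteriors combining data with expert opinion.
baseline_cap   = rng.normal(loc=500.0, scale=40.0, size=N)   # cap-and-trade pool
project_offset = rng.lognormal(mean=4.0, sigma=0.3, size=N)  # project-based credits
voluntary      = rng.triangular(left=20, mode=35, right=60, size=N)

# Scheme-wide emissions: the joint (here, summed) distribution over instruments
scheme_total = baseline_cap + project_offset + voluntary

target = 650.0  # hypothetical national target, kt CO2e/yr
p_exceed = np.mean(scheme_total > target)
print(f"P(scheme emissions exceed target) ~ {p_exceed:.3f}")
```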
Abstract:
Taipei City has put significant effort into implementing green design and green building schemes on the way to a sustainable eco-city. Although some environmental indicators have not shown significant progress in environmental improvement, implementation of the two schemes has achieved considerable results; the two schemes are therefore on the right path towards promoting a sustainable eco-city. However, it has to be admitted that the two schemes are a rather “technocratic” set of solutions with an eco-centric approach. It is suggested that not only the public sector but also the private sector needs to put more effort into implementing the schemes, and that the government needs to encourage the private sector to adopt the schemes in practice.
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First, we apply a symmetry property to partition the hemisphere and sphere. The domain of the solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Like Arvo's algorithm for sampling an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the whole hemisphere or sphere is performed in parallel for all sub-domains simultaneously, using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
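For orientation, here is a minimal sketch of the standard inverse-CDF map from the unit square to uniformly distributed directions on the unit hemisphere, fed with stratified (jittered) samples; it is not the paper's orthogonal-spherical-triangle algorithm, whose transformations are derived in the work itself.

```python
import numpy as np

def sample_hemisphere_uniform(u1, u2):
    """Map (u1, u2) in the unit square to a uniformly distributed
    direction on the upper unit hemisphere (z >= 0).

    Standard inverse-CDF construction: z = u1 gives uniform area
    (Archimedes' hat-box theorem), phi = 2*pi*u2 gives the azimuth.
    """
    z = u1
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

# Stratified (jittered) samples over the unit square; the abstract's
# algorithms likewise accommodate stratified sampling.
n = 8
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
rng = np.random.default_rng(1)
u1 = (i + rng.random((n, n))) / n
u2 = (j + rng.random((n, n))) / n
dirs = sample_hemisphere_uniform(u1, u2)
print(dirs.shape, np.allclose(np.linalg.norm(dirs, axis=-1), 1.0))
```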
Abstract:
In models of complicated physical-chemical processes, operator splitting is very often applied in order to achieve sufficient accuracy as well as efficiency of the numerical solution. The recently rediscovered weighted splitting schemes have the great advantage of being parallelizable at the operator level, which allows the computational time to be reduced when parallel computers are used. In this paper, the computational times needed for the weighted splitting methods are studied in comparison with the sequential (S) splitting and the Marchuk-Strang (MSt) splitting, and are illustrated by numerical experiments performed using simplified versions of the Danish Eulerian Model (DEM).
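A minimal sketch, on a toy linear problem y' = (A + B)y, of the splittings being compared: the first-order sequential (S) splitting, the second-order Marchuk-Strang (MSt) splitting, and a symmetrically weighted splitting whose two operator sequences are independent and hence computable in parallel. Matrices and step sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Toy linear problem y' = (A + B) y, split into sub-operators A and B.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[-0.5, 0.0], [0.3, -1.0]])
y0 = np.array([1.0, 1.0])
T, n_steps = 1.0, 100
tau = T / n_steps

eA, eB = expm(tau * A), expm(tau * B)   # full sub-steps
eA2 = expm(0.5 * tau * A)               # half sub-step for Marchuk-Strang

y_s, y_mst, y_w = y0.copy(), y0.copy(), y0.copy()
for _ in range(n_steps):
    y_s = eB @ (eA @ y_s)                   # sequential (S): first order
    y_mst = eA2 @ (eB @ (eA2 @ y_mst))      # Marchuk-Strang (MSt): second order
    # Weighted splitting: average of the two operator orderings; the two
    # sequences do not depend on each other, so they can run in parallel.
    y_w = 0.5 * (eB @ (eA @ y_w) + eA @ (eB @ y_w))

y_exact = expm(T * (A + B)) @ y0
for name, y in [("S", y_s), ("MSt", y_mst), ("weighted", y_w)]:
    print(name, np.linalg.norm(y - y_exact))
```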
Abstract:
High spatial resolution vertical profiles of pore-water chemistry have been obtained for a peatland using diffusive equilibrium in thin films (DET) gel probes. Comparison of DET pore-water data with more traditional depth-specific sampling shows good agreement and the DET profiling method is less invasive and less likely to induce mixing of pore-waters. Chloride mass balances as water tables fell in the early summer indicate that evaporative concentration dominates and there is negligible lateral flow in the peat. Lack of lateral flow allows element budgets for the same site at different times to be compared. The high spatial resolution of sampling also enables gradients to be observed that permit calculations of vertical fluxes. Sulfate concentrations fall at two sites with net rates of 1.5 and 5.0 nmol cm⁻³ day⁻¹, likely due to a dominance of bacterial sulfate reduction, while a third site showed a net gain in sulfate due to oxidation of sulfur over the study period at an average rate of 3.4 nmol cm⁻³ day⁻¹. Behaviour of iron is closely coupled to that of sulfur; there is net removal of iron at the two sites where sulfate reduction dominates and addition of iron where oxidation dominates. The profiles demonstrate that, in addition to strong vertical redox related chemical changes, there is significant spatial heterogeneity. Whilst overall there is evidence for net reduction of sulfate within the peatland pore-waters, this can be reversed, at least temporarily, during periods of drought when sulfide oxidation with resulting acid production predominates.
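A minimal sketch of the kind of vertical-flux calculation that high-resolution concentration gradients permit, using Fick's first law; the porosity, diffusion coefficient and concentration profile below are assumed values for illustration, not data from the study.

```python
import numpy as np

# Illustrative pore-water sulfate profile at 1 cm vertical resolution
depth_cm = np.arange(0, 10, 1.0)  # cm below the water table
sulfate = np.array([120, 110, 98, 85, 74, 66, 60, 57, 55, 54], float)  # nmol/cm^3

phi = 0.9  # assumed peat porosity
D = 0.5    # assumed effective diffusion coefficient, cm^2/day

# Fick's first law: J = -phi * D * dC/dz (positive = downward flux)
dC_dz = np.gradient(sulfate, depth_cm)  # nmol cm^-3 per cm
J = -phi * D * dC_dz                    # nmol cm^-2 day^-1
print(J[:3])
```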
Abstract:
A second-order-accurate, characteristic-based finite difference scheme is developed for scalar conservation laws with source terms. The scheme is an extension of well-known second-order scalar schemes for homogeneous conservation laws. Such schemes have proved immensely powerful when applied to homogeneous systems of conservation laws using flux-difference splitting. Many application areas, however, involve inhomogeneous systems of conservation laws with source terms, and the scheme presented here is applied to such systems in a subsequent paper.
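For context, a minimal sketch of the problem class: a first-order upwind finite difference scheme for a scalar conservation law with a source term, u_t + f(u)_x = s(u). The paper's scheme is second-order and characteristic-based, so this toy only illustrates the inhomogeneous setting, not the method itself; flux, source and grid parameters are illustrative.

```python
import numpy as np

# Scalar conservation law with source: u_t + f(u)_x = s(u),
# here with Burgers flux f(u) = u^2/2 and a linear decay source s(u) = -0.5*u.
f = lambda u: 0.5 * u * u
s = lambda u: -0.5 * u

nx, L = 200, 2.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
u = 1.0 + 0.5 * np.sin(np.pi * x)  # smooth, positive initial data (periodic domain)

t, T = 0.0, 0.5
while t < T:
    dt = min(0.4 * dx / np.max(np.abs(u)), T - t)  # CFL-limited time step
    # First-order upwind flux difference (valid here since u > 0, so
    # characteristics travel rightwards), plus an explicit source update.
    u = u - dt / dx * (f(u) - f(np.roll(u, 1))) + dt * s(u)
    t += dt
print(u.min(), u.max())
```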
Abstract:
The evaluation of EU policy in the area of rural land use management often encounters problems of multiple and poorly articulated objectives. Agri-environmental policy has a range of aims, including natural resource protection, biodiversity conservation and the protection and enhancement of landscape quality. Forestry policy, in addition to production and environmental objectives, increasingly has social aims, including enhancement of human health and wellbeing, lifelong learning, and the cultural and amenity value of the landscape. Many of these aims are intangible, making them hard to define and quantify. This article describes two approaches for dealing with such situations, both of which rely on substantial participation by stakeholders. The first is the Agri-Environment Footprint Index, a form of multi-criteria participatory approach. The second, applied here to forestry, is the development of ‘multi-purpose’ approaches to evaluation, which respond to the diverse needs of stakeholders through the use of mixed methods and a broad suite of indicators, selected through a participatory process. Each makes use of case studies and involves stakeholders in the evaluation process, thereby enhancing their commitment to the programmes and increasing the programmes' sustainability. Both also demonstrate more ‘holistic’ approaches to evaluation than the formal methods prescribed in the EU Common Monitoring and Evaluation Framework.
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in the most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. The model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. However, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
Abstract:
In financial decision-making, a number of mathematical models have been developed for financial management in construction. However, optimizing both qualitative and quantitative factors, together with the semi-structured nature of construction finance optimization problems, poses key challenges in solving construction finance decisions. Here, the selection of funding schemes is formulated as a modified construction loan acquisition model and solved by an adaptive genetic algorithm (AGA) approach. The basic objectives of the model are to optimize the loan and to minimize the interest payments across all projects. Multiple projects undertaken by a medium-size construction firm in Hong Kong were used as a real case study to demonstrate the application of the model to borrowing decision problems. A compromise monthly borrowing schedule was finally achieved. The results indicate that the Small and Medium Enterprise (SME) Loan Guarantee Scheme (SGS) was first identified as the source of external financing. Sources of funding can then be selected so as to avoid financial problems in the firm, by classifying qualitative factors into external, interactive and internal types and by taking additional qualitative factors, including sovereignty, credit ability and networking, into consideration. A more accurate, objective and reliable borrowing decision can thus be provided for the decision-maker analysing the financial options.
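A minimal genetic-algorithm sketch of the funding-scheme selection idea: allocate a borrowing requirement across candidate schemes so as to minimize total interest subject to per-scheme caps. The rates, caps and requirement are illustrative assumptions, and the paper's adaptive GA additionally adapts its operators and scores qualitative factors, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative funding schemes: (annual interest rate, borrowing cap in $M)
schemes = {"SGS": (0.045, 5.0), "bank_loan": (0.065, 10.0), "overdraft": (0.09, 2.0)}
names = list(schemes)
need = 8.0  # total borrowing requirement, $M

def fitness(alloc):
    """Negative total annual interest, heavily penalizing infeasible allocations."""
    interest = sum(a * schemes[n][0] for n, a in zip(names, alloc))
    penalty = sum(max(0.0, a - schemes[n][1]) for n, a in zip(names, alloc))
    penalty += abs(sum(alloc) - need)  # must borrow exactly what is needed
    return -(interest + 10.0 * penalty)

def mutate(alloc, scale=0.5):
    # Gaussian perturbation, clipped so allocations stay non-negative
    return np.clip(alloc + rng.normal(0.0, scale, len(alloc)), 0.0, None)

pop = [rng.uniform(0, 5, len(names)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    # Crossover: arithmetic blend of two elite parents, then mutation
    pop = elite + [mutate(0.5 * (elite[rng.integers(10)] + elite[rng.integers(10)]))
                   for _ in range(30)]
best = max(pop, key=fitness)
print(dict(zip(names, np.round(best, 2))), "interest:",
      round(sum(a * schemes[n][0] for n, a in zip(names, best)), 3))
```

Under these assumed rates the search should drive the allocation toward the cheap SGS cap first, mirroring the paper's finding that SGS was identified first as the source of external financing.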