987 results for "Statistical efficiency"
Abstract:
This paper introduces an index of tax optimality that measures the distance of some current tax structure from the optimal tax structure in the presence of public goods. This index is defined on the [0, 1] interval and measures the proportion of the optimal tax rates that will achieve the same welfare outcome as some arbitrarily given initial tax structure. We call this number the Tax Optimality Index. We also show how the basic methodology can be altered to derive a revenue equivalent uniform tax, which measures the tax burden implied by the public sector. A numerical example is used to illustrate the method developed, and extensions of the analysis to handle models with multiple households and nonlinear taxation structures are undertaken.
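The index can be illustrated numerically. A minimal sketch, assuming a toy quadratic welfare function and uniform scaling of the optimal rates (both hypothetical choices, not the paper's model): find the θ in [0, 1] at which scaling the optimal rates by θ yields the same welfare as the initial tax structure.

```python
# Illustrative sketch of a tax-optimality-style index (hypothetical welfare
# function; not the paper's model). We search for theta in [0, 1] such that
# W(theta * t_opt) == W(t0), i.e. the fraction of the optimal rates that
# reproduces the welfare of the current tax structure.

def welfare(t, t_opt):
    # Toy concave welfare: maximal when t equals the optimal rates.
    return -sum((ti - oi) ** 2 for ti, oi in zip(t, t_opt))

def tax_optimality_index(t0, t_opt, tol=1e-9):
    target = welfare(t0, t_opt)
    lo, hi = 0.0, 1.0
    # W(theta * t_opt) increases monotonically in theta on [0, 1],
    # so bisection converges to the unique index.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if welfare([mid * oi for oi in t_opt], t_opt) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_opt = [0.3, 0.5]   # hypothetical optimal tax rates
t0 = [0.15, 0.25]    # current rates, here exactly half the optimum
print(round(tax_optimality_index(t0, t_opt), 6))  # -> 0.5
```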
Abstract:
The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on forest inventory data (measured or modelled stand data), and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a-1, and the mean soil C sink was 0.7 Tg C a-1. Soil is slowly accumulating C because the increased growing stock has raised detritus input above its level at the beginning of the period, and soil C stocks remain unsaturated relative to this input. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently of opposite sign (e.g. vegetation was a sink while soil was a source).
The inclusion of vegetation sinks in the national GHG inventory of 2003 widened its uncertainty from between -4% and 9% to ± 19% (95% CI), and the further inclusion of upland mineral soils widened it to ± 24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy owing to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates are a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these uncertainties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data, because the non-linear decomposition process is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
Abstract:
Three-dimensional (3D) hierarchical nanoscale architectures composed of building blocks with specifically engineered morphologies are expected to play important roles in the fabrication of 'next generation' microelectronic and optoelectronic devices, owing to their high surface-to-volume ratio as well as their optoelectronic properties. Herein, a series of well-defined 3D hierarchical rutile TiO2 architectures (HRT) were successfully prepared using a facile hydrothermal method, without any surfactant or template, simply by changing the concentration of hydrochloric acid used in the synthesis. The production of these materials provides, to the best of our knowledge, the first identified example of a ledgewise growth mechanism in a rutile TiO2 structure. Also for the first time, a dye-sensitized solar cell (DSC) incorporating an HRT is reported in conjunction with a high-extinction-coefficient metal-free organic sensitizer (D149), achieving a conversion efficiency of 5.5%, which is superior to cells employing P25 (4.5%) and comparable to a state-of-the-art commercial transparent titania anatase paste (5.8%). Furthermore, an overall conversion efficiency of 8.6% was achieved when the HRT was used as the light-scattering layer, a considerable improvement over the commercial transparent/reflector titania anatase paste (7.6%) and a significantly smaller performance gap than has been seen previously.
Abstract:
Targeted nanomedicines offer a strategy for greatly enhancing accumulation of a therapeutic within a specific tissue in animals. In this study, we report on the comparative targeting efficiency toward prostate-specific membrane antigen (PSMA) of a number of different ligands that are covalently attached by the same chemistry to a polymeric nanocarrier. The targeting ligands included a small molecule (glutamate urea), a peptide ligand, and a monoclonal antibody (J591). A hyperbranched polymer (HBP) was utilized as the nanocarrier and contained a fluorophore for tracking/analysis, whereas the pendant functional chain-ends provided a handle for ligand conjugation. Targeting efficiency of each ligand was assessed in vitro using flow cytometry and confocal microscopy to compare the degree of binding and internalization of the HBPs by human prostate cancer (PCa) cell lines with different PSMA expression status (PC3-PIP (PSMA+) and PC3-FLU (PSMA−)). The peptide ligand was further investigated in vivo, in which BALB/c nude mice bearing subcutaneous PC3-PIP and PC3-FLU PCa tumors were injected intravenously with the HBP-peptide conjugate and assessed by fluorescence imaging. Enhanced accumulation in the tumor tissue of PC3-PIP compared to PC3-FLU highlighted the applicability of this system as a future imaging and therapeutic delivery vehicle.
Abstract:
This case study has been carried out as a comparison between two different land-use strategies for climate change mitigation, with possible application within the Clean Development Mechanism. The benefits of afforestation for carbon sequestration versus for bioenergy production are compared in the context of development planning to meet increasing domestic and agricultural demand for electricity in Hosahalli village, Karnataka, India. One option is to increase the local biomass-based electricity generation, requiring an increased biomass plantation area. This option is compared with fossil-based electricity generation where the area is instead used for producing wood for non-energy purposes while also sequestering carbon in the soil and standing biomass. The different options have been assessed using the PRO-COMAP model. The ranking of the different options varies depending on the system boundaries and time period. Results indicate that, in the short-term (30 years) perspective, the mitigation potential of the long-rotation plantation is largest, followed by the short-rotation plantation delivering wood for energy. The bioenergy option is however preferred if a long-term view is taken. Short-rotation forests delivering wood for short-lived non-energy products have the smallest mitigation potential, unless a large share of the wood products is used for energy purposes (replacing fossil fuels) after having served their initial purpose. If managed in a sustainable manner, all of these strategies can contribute to the improvement of the social and environmental situation of the local community.
Abstract:
Considering the growing energy needs and concern over environmental degradation, clean and inexhaustible energy sources, e.g. solar energy, are receiving greater attention for various applications. The use of solar energy systems for low-temperature applications reduces the burden on conventional fossil fuels and has little or no harmful effect on the environment. The performance of a solar system depends to a great extent on the collector used to convert solar radiant energy into thermal energy. A solar evaporator-collector (SEC) is basically an unglazed flat-plate collector in which a refrigerant, such as R134a, is used as the working fluid. As the operating temperature of an SEC is very low, it collects energy both from solar irradiation and from the ambient environment, leading to a much higher efficiency than conventional collectors. The capability of the SEC to utilize ambient energy also enables the system to operate at night. It is therefore not appropriate to evaluate the performance of an SEC with the conventional efficiency equation, in which ambient energy and condensation are not considered as energy inputs in addition to irradiation. At the National University of Singapore, several solar-assisted heat pump (SAHP) systems were built to evaluate performance under the meteorological conditions of Singapore for thermal desalination applications, with the SEC as the main component for harnessing renewable energy. In this paper, the design and performance of the SEC are explored. Furthermore, an attempt is made to develop an efficiency equation for the SEC, and a maximum efficiency of 98% was attained under the meteorological conditions of Singapore.
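The argument for a modified efficiency equation can be made concrete with arithmetic. A sketch with purely hypothetical numbers (not measurements from the study): dividing the useful heat gain by irradiation alone can push the conventional figure above 100% for a collector that also absorbs ambient energy, whereas counting the ambient gain in the denominator keeps the figure physically meaningful.

```python
# Why a conventional efficiency equation misrepresents an SEC (illustrative
# numbers only, not data from the study). Conventional efficiency divides
# the useful heat gain by solar irradiation alone; an SEC also absorbs
# ambient energy, so the conventional figure can exceed 100%.

G = 600.0          # solar irradiance on the collector, W/m^2 (hypothetical)
A = 2.0            # collector area, m^2 (hypothetical)
q_ambient = 400.0  # heat gained from ambient air/condensation, W (hypothetical)
q_useful = 1400.0  # heat delivered to the refrigerant, W (hypothetical)

eta_conventional = q_useful / (G * A)
eta_modified = q_useful / (G * A + q_ambient)

print(f"conventional: {eta_conventional:.2f}")  # exceeds 1, physically misleading
print(f"modified:     {eta_modified:.2f}")      # counts ambient input too
```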
Abstract:
This thesis investigates factors that impact the energy efficiency of a mining operation. An innovative mathematical framework and solution approach are developed to model, solve and analyse an open-pit coal mine. A case study in South East Queensland is investigated to validate the approach and explore opportunities for using it to aid long-, medium- and short-term decision makers.
Abstract:
Vampire bats, Desmodus rotundus, must maximize their feeding cycle of one blood meal per day by being efficient in the stalking and acquisition of their food. Riskin and Hermanson documented the running gait of the common vampire bat and observed that the bats were efficient at running speeds, using longer stride lengths and thus decreased stride frequency. We obtained preliminary data on gait maintained for up to 10 minutes on a moving treadmill belt at speeds ranging from 0.23 to 0.74 m/s, which spanned a range from walking to running gaits. Bats tended to transition between gaits at about 0.40 m/s. Fourteen bats were studied, including four that were able to walk or run for 10 minutes. There was no significant change in either stride duration or stride frequency associated with an increase in speed. We estimated O2 consumption and CO2 production before and after exercise, and found that O2 consumption was elevated both 1 minute and 5 minutes after exercise. CO2 levels increased significantly 1 minute after exercise, but tended back towards the pre-exercise level by 5 minutes after exercise. Two bats were tested for blood O2, CO2 and pH levels. Interestingly, pH levels fell from 7.3 to about 7.0, indicating lactate accumulation.
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task that has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduce the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up.
Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
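The role of efficient data structures in reaching loglinear complexity can be sketched in miniature. A toy illustration only (the actual inversion-based identification method is far more involved): if each observation set is reduced to a low-dimensional coordinate, grouping sets by quantized coordinates yields candidate linkages without an O(n^2) all-pairs comparison.

```python
# Toy sketch of efficient candidate generation for linking observation sets
# (illustrative only; not the thesis's actual algorithm). Each observation
# set is reduced to a hypothetical 2-D coordinate; sets whose quantized
# coordinates fall in the same cell become candidate linkages, avoiding the
# quadratic all-pairs comparison.

from collections import defaultdict

def candidate_linkages(addresses, cell=0.1):
    bins = defaultdict(list)
    for set_id, (x, y) in addresses.items():
        bins[(round(x / cell), round(y / cell))].append(set_id)
    # Any cell with more than one member yields a candidate identification.
    return [ids for ids in bins.values() if len(ids) > 1]

obs = {
    "A": (1.02, 3.41),  # hypothetical reduced coordinates
    "B": (1.03, 3.39),  # close to A -> same quantized cell
    "C": (7.75, 0.20),
}
print(candidate_linkages(obs))  # -> [['A', 'B']]
```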
Abstract:
A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate-length technology. The proposed methodology is demonstrated on a multiplier circuit built from the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical technique of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), is employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs, with an error of less than 1% and reductions in uncertainty of at least 6X on a normalized basis, compared against worst-case design.
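The RSM-plus-least-squares step can be illustrated in a few lines. A minimal sketch with a hypothetical response standing in for gate power (the paper's actual power models and parameters are not shown): responses over a simple factorial design are fit with a quadratic surface by ordinary least squares.

```python
# Minimal response-surface fit in the spirit of RSM + least squares
# (hypothetical response function; not the paper's power model). A
# two-factor, three-level full factorial design (a simple DOE) is fit with
# y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2.

import numpy as np

levels = [-1.0, 0.0, 1.0]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()

# Hypothetical "simulated" response standing in for gate dynamic power.
y = 2.0 + 3.0 * x1 - 1.0 * x2 + 0.5 * x1**2

# Design matrix for the quadratic model, solved by ordinary least squares.
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coeffs, 6))  # recovers the generating coefficients
```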
Abstract:
"We thank Mr Gilder for his considered comments and suggestions for alternative analyses of our data. We also appreciate Mr Gilder's support of our call for larger studies to contribute to the evidence base for preoperative loading with high-carbohydrate fluids..."
Abstract:
Water quality data are often collected at different sites over time to improve water quality management. Water quality data usually exhibit the following characteristics: non-normal distribution, presence of outliers, missing values, values below detection limits (censored), and serial dependence. It is essential to apply appropriate statistical methodology when analyzing water quality data in order to draw valid conclusions and hence provide useful advice for water management. In this chapter, we provide and demonstrate various statistical tools for analyzing such water quality data, and introduce how to use the statistical software R to analyze water quality data with various statistical methods. A dataset collected from the Susquehanna River Basin is used to demonstrate the statistical methods provided in this chapter. The dataset can be downloaded from http://www.srbc.net/programs/CBP/nutrientprogram.htm.
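One workhorse method for such data is the Mann-Kendall trend test, a nonparametric test that tolerates non-normal distributions and outliers. The chapter itself works in R; the sketch below is a Python illustration of the basic test statistic only, omitting the corrections for ties, censoring, and serial dependence that real water quality series require. The concentration values are made up for illustration.

```python
# Mann-Kendall trend test, a standard nonparametric test for monotonic
# trends in water-quality series (robust to non-normal data and outliers).
# Simplified sketch: no corrections for ties, censoring, or serial
# dependence; the data below are hypothetical.

import math

def mann_kendall(x):
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance assuming no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z  # |z| > 1.96 suggests a trend at roughly the 5% level

concentration = [2.1, 2.4, 2.2, 2.9, 3.1, 3.0, 3.6, 3.8, 3.7, 4.2]
s, z = mann_kendall(concentration)
print(s, round(z, 3))
```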
Abstract:
REDEFINE is a reconfigurable SoC architecture that provides a unique platform for high-performance and low-power computing by exploiting the synergistic interaction between a coarse-grain dynamic dataflow model of computation (to expose abundant parallelism in applications) and runtime composition of efficient compute structures (on the reconfigurable computation resources). We propose and study the throttling of execution in REDEFINE to maximize architecture efficiency. A feature-specific, fast hybrid (mixed-level) simulation framework for early design-phase studies was developed and implemented to make the huge design-space exploration practical. We perform performance modeling in terms of the selection of important performance criteria, rank the explored throttling schemes, and investigate the effectiveness of the design-space exploration using statistical hypothesis testing. We find throttling schemes that simultaneously give an appreciable (24.8%) overall performance gain in the architecture and a 37% resource-usage gain in the throttling unit.
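The hypothesis-testing step for comparing schemes can be sketched generically. An illustration with synthetic performance samples (not data from the REDEFINE study, and not necessarily the test the authors used): a Welch two-sample t-statistic decides whether one throttling scheme genuinely outperforms another or the difference is simulation noise.

```python
# Sketch of comparing two throttling schemes with a Welch two-sample t-test
# (synthetic performance samples; illustrative of hypothesis testing in
# design-space exploration, not the study's actual procedure).

import math
import random

def welch_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

random.seed(0)
# Hypothetical throughput samples (ops/cycle) for two throttling schemes.
scheme_a = [random.gauss(1.20, 0.05) for _ in range(30)]
scheme_b = [random.gauss(1.10, 0.05) for _ in range(30)]

t = welch_t(scheme_a, scheme_b)
# |t| well above ~2 indicates a real difference at roughly the 5% level.
print(f"t = {t:.2f}; scheme A {'outperforms' if t > 2 else 'is similar to'} B")
```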