Abstract:
This paper introduces an index of tax optimality that measures the distance of some current tax structure from the optimal tax structure in the presence of public goods. This index is defined on the [0, 1] interval and measures the proportion of the optimal tax rates that will achieve the same welfare outcome as some arbitrarily given initial tax structure. We call this number the Tax Optimality Index. We also show how the basic methodology can be altered to derive a revenue equivalent uniform tax, which measures the tax burden implied by the public sector. A numerical example is used to illustrate the method developed, and extensions of the analysis to handle models with multiple households and nonlinear taxation structures are undertaken.
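The index described above can be illustrated with a toy computation: a minimal sketch that finds the scalar alpha in [0, 1] at which scaled-down optimal tax rates reproduce the welfare of the initial tax structure. The quadratic welfare function and the tax vectors are illustrative assumptions, not the paper's model.

```python
import numpy as np

def tax_optimality_index(welfare, t_init, t_opt, tol=1e-9):
    """Bisect for alpha in [0, 1] with welfare(alpha * t_opt) == welfare(t_init).

    Assumes welfare increases monotonically in alpha along the ray to t_opt,
    so the initial structure's welfare is matched by a unique scaling factor.
    """
    target = welfare(t_init)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if welfare(mid * t_opt) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy concave welfare, maximised at the (hypothetical) optimal tax rates.
t_opt = np.array([0.3, 0.5])
welfare = lambda t: -np.sum((t - t_opt) ** 2)

t_init = np.array([0.1, 0.2])        # arbitrary initial tax structure
alpha = tax_optimality_index(welfare, t_init, t_opt)
```

An index near 1 would mean the initial structure is already close, in welfare terms, to the optimum; here the toy initial structure achieves the welfare of roughly 38% of the optimal rates.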
Abstract:
DNA amplification using the Polymerase Chain Reaction (PCR) in a small volume is used in lab-on-a-chip systems involving DNA manipulation. For liquid volumes of a few microliters, it becomes difficult to measure and monitor the thermal profile accurately and reproducibly, which is an essential requirement for successful amplification. Conventional temperature sensors are either not biocompatible or too large, and hence are positioned away from the liquid, leading to calibration errors. In this work we present a fluorescence-based detection technique that is completely biocompatible and directly measures the liquid temperature. PCR is demonstrated in a 3 μL silicon-glass microfabricated device using non-contact induction heating whose temperature is controlled using fluorescence feedback from SYBR Green I dye molecules intercalated within sensor DNA. The performance is compared with temperature feedback using a thermocouple sensor. Melting-curve analysis followed by gel electrophoresis is used to confirm product specificity after the PCR cycles. (c) 2007 Elsevier B.V. All rights reserved.
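The fluorescence-feedback idea can be sketched as a two-step loop: invert a calibration curve to recover the liquid temperature from the dye signal, then drive the heater from the temperature error. The linear calibration coefficients, controller gain, and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def temp_from_fluorescence(f, f0=1.0, slope=-0.01, t0=25.0):
    """Invert a hypothetical linear calibration F = f0 + slope * (T - t0).

    SYBR Green fluorescence falls as temperature rises, so slope < 0.
    """
    return t0 + (f - f0) / slope

def heater_duty(t_meas, t_set, kp=0.05):
    """Proportional controller: duty cycle in [0, 1] for the induction heater."""
    return float(np.clip(kp * (t_set - t_meas), 0.0, 1.0))

t_liquid = temp_from_fluorescence(0.3)   # 25 + (0.3 - 1.0) / -0.01 = 95.0 °C
duty = heater_duty(t_liquid, 95.0)       # at setpoint, duty is 0.0
duty_cold = heater_duty(75.0, 95.0)      # 20 °C below setpoint saturates at 1.0
```

In practice the calibration would be measured per device, since the dye signal also depends on concentration and optics.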
Abstract:
To mitigate the effects of climate change, countries worldwide are advancing technologies to reduce greenhouse gas emissions. This paper proposes and measures optimal production resource reallocation using data envelopment analysis. The research attempts to clarify the effect of optimal production resource reallocation on CO2 emissions reduction, focusing on regional and industrial characteristics. We use finance, energy, and CO2 emissions data from 13 industrial sectors in 39 countries from 1995 to 2009. The resulting emissions reduction potential is 2.54 Gt-CO2 in the year 2009, with former communist countries having the largest potential to reduce CO2 emissions in the manufacturing sectors. In particular, basic-material industries, including the chemical and steel sectors, have substantial potential to reduce CO2 emissions.
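In the special case of a single input and a single output, DEA (CCR) efficiency scores reduce to each unit's productivity ratio normalised by the best observed performer, which makes the reallocation logic easy to see. The three-sector data below are made up, not the study's 39-country panel.

```python
import numpy as np

def ccr_efficiency(x, y):
    """Single-input/single-output CCR efficiency.

    Each decision-making unit's output/input ratio, divided by the best
    observed ratio; efficient units score 1.0, others less.
    """
    ratio = np.asarray(y, float) / np.asarray(x, float)
    return ratio / ratio.max()

# Hypothetical sectors: energy input and economic output in arbitrary units.
energy_in = [2, 4, 3]
output = [2, 2, 3]
eff = ccr_efficiency(energy_in, output)   # -> [1.0, 0.5, 1.0]

# A crude reallocation reading: inefficient units could cut input
# (and proportional emissions) by a factor (1 - efficiency).
co2 = np.array([10.0, 20.0, 15.0])
reduction_potential = (1 - eff) * co2
```

The paper's multi-input, multi-sector setting requires solving a linear program per unit; this ratio form is only the degenerate one-dimensional case.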
Abstract:
This article presents the analysis and design of a compact multi-layer, high-selectivity wideband bandpass filter using stub-loaded and 'U'-shaped resonators over a slotted bottom ground plane. While the resonators with folded open-circuit stub loadings create the desired bandpass characteristics, the 'U'-shaped resonators reduce the size of the filter. The slotted bottom ground plane is used to enhance the coupling to achieve the desired bandwidth. The proposed filter has been analyzed using a circuit model, and the results were verified through full-wave simulations and measurements. The fabricated filter is compact, measuring 18 mm x 25 mm x 1.6 mm. (C) 2010 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52: 1387-1389, 2010; Published online in Wiley InterScience (www.interscience.wiley.com).
Abstract:
An approximate dynamic programming (ADP) based neurocontroller is developed for a heat transfer application. The heat transfer problem for a fin in a car's electronic module is modeled as a nonlinear distributed parameter (infinite-dimensional) system by taking into account heat loss and generation due to conduction, convection and radiation. A low-order, finite-dimensional lumped parameter model for this problem is obtained by using Galerkin projection and basis functions designed through the 'proper orthogonal decomposition' (POD) technique and 'snapshot' solutions. A suboptimal neurocontroller is obtained with a single-network adaptive critic (SNAC). A further contribution of this paper is the development of an online robust controller to account for unmodeled dynamics and parametric uncertainties. A weight update rule is presented that guarantees boundedness of the weights and eliminates the need for the persistence of excitation (PE) condition to be satisfied. Since the ADP and neural network based controllers are of fairly general structure, they appear to have the potential to be controller synthesis tools for nonlinear distributed parameter systems, especially where it is difficult to obtain an accurate model.
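The POD/Galerkin model-reduction step described above can be sketched via the singular value decomposition of a snapshot matrix: the left singular vectors are the empirical basis functions, and states are projected onto the few modes that capture most of the snapshot energy. The snapshot data here are synthetic, not the fin model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column is the (discretised) temperature field at one
# time instant. Built rank-3 here so a low-order basis is exact.
snapshots = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 40))

# POD basis = left singular vectors; singular values rank the modes by energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
basis = U[:, :r]

# Galerkin-style projection of a state onto the low-order subspace and back.
state = snapshots[:, 0]
reduced = basis.T @ state          # r-dimensional coordinates
reconstructed = basis @ reduced    # lift back to the full grid
```

In the actual method, the reduced coordinates evolve under a small system of ODEs obtained by projecting the governing PDE onto the same basis; this sketch shows only the basis construction and projection.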
Abstract:
Receive antenna selection (AS) reduces the hardware complexity of multi-antenna receivers by dynamically connecting an instantaneously best antenna element to the available radio frequency (RF) chain. Due to the hardware constraints, the channels at various antenna elements have to be sounded sequentially to obtain estimates that are required for selecting the "best" antenna and for coherently demodulating data. Consequently, the channel state information at different antennas is outdated by different amounts. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimum. Rather, the channel estimates of different antennas should be weighted differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then derive an explicit formula for the optimal selection weights that minimize the SEP. We find that when selection weights are not used, the SEP need not improve as the number of antenna elements increases, which is in contrast to the ideal channel estimation case. However, the optimal selection weights remedy this situation and significantly improve performance.
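The selection rule can be sketched as follows: instead of picking the antenna with the largest estimated gain, each estimate is scaled by a weight that discounts older, noisier training observations. The estimates and weights below are illustrative, not the optimal weights derived in the paper.

```python
import numpy as np

def select_antenna(h_est, weights):
    """Pick the antenna maximising the weighted estimated channel gain.

    weights < 1 discount estimates that are more outdated or noisier,
    reflecting the sequential sounding of the antenna elements.
    """
    metric = np.abs(weights * h_est) ** 2
    return int(np.argmax(metric))

# Illustrative: antenna 0's estimate is older, so it is discounted more.
h_est = np.array([1.0 + 0j, 0.9 + 0j])   # sequential channel estimates
weights = np.array([0.6, 1.0])           # hypothetical selection weights
best = select_antenna(h_est, weights)    # weighted metric favours antenna 1
```

An unweighted rule would pick antenna 0 here (largest raw estimate); the weighted rule prefers the fresher estimate on antenna 1, which is the qualitative effect the paper optimises.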
Abstract:
Hardware constraints, which motivate receive antenna selection, also require that various antenna elements at the receiver be sounded sequentially to obtain estimates required for selecting the "best" antenna and for coherently demodulating data thereafter. Consequently, the channel state information at different antennas is outdated by different amounts and corrupted by noise. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimum. Rather, a preferable strategy is to linearly weight the channel estimates of different antennas differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then characterize explicitly the optimal selection weights that minimize the SEP. We also consider packet reception, in which multiple symbols of a packet are received by the same antenna. New suboptimal, but computationally efficient weighted selection schemes are proposed for reducing the packet error rate. The benefits of weighted selection are also demonstrated using a practical channel code used in third generation cellular systems. Our results show that optimal weighted selection yields a significant performance gain over conventional unweighted selection.
Abstract:
From 1990 to 2009, Foreign Direct Investment (henceforth FDI) in Finland fluctuated greatly. This paper analyzes the overall development and basic characteristics of FDI in Finland, covering the period from 1990 to the present. By comparing FDI in Finland with FDI in other countries, the picture of Finland's FDI position in the world market becomes clearer. Statistical data, tables and figures are used to describe the trend of FDI in Finland. All the data used in this study were obtained from Statistics Finland, UNCTAD, the OECD, the World Bank, the International Labour Office, the Investment Map website and other sources. The study also finds a large, long-lasting and increasing imbalance between inward and outward FDI in Finland: the performance of outward FDI is stronger than that of inward FDI, and Finland's FDI position in the world is rather modest. Based on existing theories, I analyze the factors that might determine the size of FDI inflows into Finland. The econometric model of this thesis is based on time series data ranging from 1990 to 2007. A log-linear regression model is adopted to analyze the impact of each variable. The regression results show that labor cost and investment in education have a negative influence on FDI inflows into Finland; excessively high labor cost is the main impediment to FDI in Finland, explaining the relatively small size of FDI inflows. GDP and economic openness have a significant positive impact on FDI inflows into Finland; the other variables do not emerge as significant factors, contrary to expectations. Meanwhile, the impact of the most recent financial and economic crisis on FDI in the world and in Finland is discussed as well. FDI inflows worldwide and in Finland suffered a major setback from the 2008 global crisis.
The economic crisis has had an undoubtedly significant negative influence on FDI flows in the world and in Finland. Nevertheless, apart from its negative impact, the crisis also gives policymakers a chance to implement more efficient policies in order to create a pro-business and pro-investment climate for the recovery of FDI inflows. The corresponding policies and measures aimed at accelerating the recovery of falling FDI are discussed as well.
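The log-linear regression setup used in the thesis can be sketched with ordinary least squares on logged variables, where the fitted coefficients are elasticities. The data below are synthetic, and the variable names and elasticities are illustrative, not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 18  # annual observations, e.g. 1990-2007

# Synthetic explanatory variables (arbitrary units).
gdp = rng.uniform(100, 200, n)
labor_cost = rng.uniform(20, 40, n)

# Synthetic FDI with a positive GDP elasticity and a negative
# labor-cost elasticity, plus multiplicative noise.
fdi = 5.0 * gdp**0.8 * labor_cost**-0.5 * np.exp(rng.normal(0, 0.05, n))

# Log-linear model: ln(FDI) = b0 + b1*ln(GDP) + b2*ln(LaborCost) + e.
X = np.column_stack([np.ones(n), np.log(gdp), np.log(labor_cost)])
beta, *_ = np.linalg.lstsq(X, np.log(fdi), rcond=None)
# beta[1] recovers a positive GDP elasticity, beta[2] a negative one,
# matching the qualitative pattern the thesis reports.
```

With only 18 annual observations, as in the thesis, such estimates have wide confidence intervals, which is worth keeping in mind when reading significance claims.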
Abstract:
To enhance the utilization of wood, sawmills are forced to place more emphasis on planning in order to master the whole production chain from the forest to the end product. One significant obstacle to integrating the forest-sawmill-market production chain is the lack of appropriate information about forest stands. Since the wood procurement point of view has been almost totally disregarded in forest planning systems, there has been a great need to develop an easy and efficient pre-harvest measurement method allowing separate measurement of stands prior to harvesting. The main purpose of this study was to develop a measurement method for pine stands which forest managers could use to describe the properties of the standing trees for sawing production planning. The study materials were collected from ten Scots pine (Pinus sylvestris) stands located in North Häme and South Pohjanmaa, in southern Finland. The data comprise test sawing data on 314 pine stems, dbh and height measures of all trees, measures of the quality parameters of pine sawlog stems in all ten study stands, and the locations of all trees in six stands. The study was divided into four sub-studies, which deal with pine quality prediction, construction of diameter and dead branch height distributions, sampling designs, and the application of height and crown height models. The final proposal for the pre-harvest measurement method is a synthesis of the individual sub-studies. The quality analysis resulted in choosing dbh, the distance from stump height to the first dead branch (dead branch height), crown height and tree height as the most appropriate quality characteristics of Scots pine. Dbh and dead branch height are measured for each pine sample tree, while height and crown height are derived from dbh measures with the aid of mixed height and crown height models. Pine and spruce diameter distributions, as well as the dead branch height distribution, are most effectively predicted by the kernel function.
Roughly 25 sample trees seem to be appropriate in pure pine stands. In mixed stands, the number of sample trees needs to be increased in relation to the proportion of pines in order to attain the same level of accuracy.
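The kernel-function prediction of a stand's diameter distribution can be sketched as a Gaussian kernel density estimate built from roughly 25 sample-tree diameters. The diameters and bandwidth below are illustrative, not the study's data or its fitted smoothing parameters.

```python
import numpy as np

def kernel_density(x_grid, samples, bandwidth):
    """Gaussian kernel estimate of a dbh distribution from sample-tree diameters.

    Averages a Gaussian bump of width `bandwidth` centred on each sample.
    """
    samples = np.asarray(samples, float)[:, None]
    u = (np.asarray(x_grid, float)[None, :] - samples) / bandwidth
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=0) / bandwidth

# ~25 sample-tree diameters at breast height (cm), illustrative values.
rng = np.random.default_rng(2)
dbh = rng.normal(25, 5, 25)

grid = np.linspace(5, 45, 200)
density = kernel_density(grid, dbh, bandwidth=3.0)
```

The same machinery applies to the dead branch height distribution; only the samples and bandwidth change.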
Abstract:
An approximate dynamic programming (ADP)-based suboptimal neurocontroller to obtain desired temperature for a high-speed aerospace vehicle is synthesized in this paper. A 1-D distributed parameter model of a fin is developed from basic thermal physics principles. "Snapshot" solutions of the dynamics are generated with a simple dynamic inversion-based feedback controller. Empirical basis functions are designed using the "proper orthogonal decomposition" (POD) technique and the snapshot solutions. A low-order nonlinear lumped parameter system to characterize the infinite dimensional system is obtained by carrying out a Galerkin projection. An ADP-based neurocontroller with a dual heuristic programming (DHP) formulation is obtained with a single-network-adaptive-critic (SNAC) controller for this approximate nonlinear model. Actual control in the original domain is calculated with the same POD basis functions through a reverse mapping. Further contribution of this paper includes development of an online robust neurocontroller to account for unmodeled dynamics and parametric uncertainties inherent in such a complex dynamic system. A neural network (NN) weight update rule that guarantees boundedness of the weights and relaxes the need for persistence of excitation (PE) condition is presented. Simulation studies show that in a fairly extensive but compact domain, any desired temperature profile can be achieved starting from any initial temperature profile. Therefore, the ADP and NN-based controllers appear to have the potential to become controller synthesis tools for nonlinear distributed parameter systems.
Abstract:
This paper reports on our study of the edge of the 2/5 fractional quantum Hall state, which is more complicated than the edge of the 1/3 state because of the presence of edge sectors corresponding to different partitions of composite fermions in the lowest two Lambda levels. The addition of an electron at the edge is a nonperturbative process and it is not a priori obvious in what manner the added electron distributes itself over these sectors. We show, from a microscopic calculation, that when an electron is added at the edge of the ground state in the [N(1), N(2)] sector, where N(1) and N(2) are the numbers of composite fermions in the lowest two Lambda levels, the resulting state lies in either [N(1) + 1, N(2)] or [N(1), N(2) + 1] sectors; adding an electron at the edge is thus equivalent to adding a composite fermion at the edge. The coupling to other sectors of the form [N(1) + 1 + k, N(2) - k], k integer, is negligible in the asymptotically low-energy limit. This study also allows a detailed comparison with the two-boson model of the 2/5 edge. We compute the spectral weights and find that while the individual spectral weights are complicated and nonuniversal, their sum is consistent with an effective two-boson description of the 2/5 edge.
Abstract:
Niche differentiation has been proposed as an explanation for rarity in species assemblages. To test this hypothesis requires quantifying the ecological similarity of species. This similarity can potentially be estimated by using phylogenetic relatedness. In this study, we predicted that if niche differentiation does explain the co-occurrence of rare and common species, then rare species should contribute greatly to the overall community phylogenetic diversity (PD), abundance will have phylogenetic signal, and common and rare species will be phylogenetically dissimilar. We tested these predictions by developing a novel method that integrates species rank abundance distributions with phylogenetic trees and trend analyses, to examine the relative contribution of individual species to the overall community PD. We then supplement this approach with analyses of phylogenetic signal in abundances and measures of phylogenetic similarity within and between rare and common species groups. We applied this analytical approach to 15 long-term temperate and tropical forest dynamics plots from around the world. We show that the niche differentiation hypothesis is supported in six of the nine gap-dominated forests but is rejected in the six disturbance-dominated and three gap-dominated forests. We also show that the three metrics utilized in this study each provide unique but corroborating information regarding the phylogenetic distribution of rarity in communities.
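The per-species contribution to community phylogenetic diversity can be sketched with Faith's PD on a toy phylogeny: a species' contribution is the PD lost when it is removed from the community. The species names and branch lengths below are invented for illustration.

```python
# Toy phylogeny ((A,B),C): each edge stores its length and the tip
# species found below it.
edges = [
    (1.0, {"A"}),        # terminal branch to A
    (1.0, {"B"}),        # terminal branch to B
    (2.0, {"C"}),        # long terminal branch to the distinct species C
    (0.5, {"A", "B"}),   # internal branch above the (A,B) clade
]

def faith_pd(species, edges):
    """Faith's PD: total length of branches on paths to the species set."""
    return sum(length for length, tips in edges if tips & species)

def pd_contribution(sp, edges, community):
    """Loss in community PD if species sp is removed."""
    return faith_pd(community, edges) - faith_pd(community - {sp}, edges)

community = {"A", "B", "C"}
contrib = {s: pd_contribution(s, edges, community) for s in community}
# The phylogenetically distinct species C contributes twice as much PD
# as A or B, illustrating how a rare but isolated lineage can dominate
# community PD.
```

Pairing these contributions with species rank abundances, as the study does, then shows whether rare species sit on the long, distinctive branches.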
Abstract:
Ubiquitous computing is an emerging paradigm which enables users to access preferred services wherever they are, whenever they want, and in the way they need, with zero administration. While moving from one place to another, users do not need to specify and configure their surrounding environment; the system initiates the necessary adaptation by itself to cope with the changing environment. In this paper we propose a system to provide context-aware ubiquitous multimedia services without user intervention. We analyze the context of the user based on weights, identify the UMMS (Ubiquitous Multimedia Service) based on the collected context information and the user profile, search for the optimal server to provide the required service, and then adapt the service according to the user's local environment and preferences. The experiment was conducted several times with different context parameters, weights and user preferences. The results are quite encouraging.
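The weight-based context analysis can be sketched as a weighted scoring of candidate servers, with the best-scoring server chosen to deliver the service. The attribute names, values and weights below are hypothetical, not the paper's parameters.

```python
def score_server(context, weights):
    """Weighted sum of normalised context attributes (each in [0, 1])."""
    return sum(weights[k] * context[k] for k in weights)

def pick_server(servers, weights):
    """Choose the candidate server best matching the weighted user context."""
    return max(servers, key=lambda name: score_server(servers[name], weights))

# Hypothetical attributes: higher is better for every attribute here.
weights = {"bandwidth": 0.5, "proximity": 0.3, "load": 0.2}
servers = {
    "s1": {"bandwidth": 0.9, "proximity": 0.4, "load": 0.8},
    "s2": {"bandwidth": 0.6, "proximity": 0.9, "load": 0.9},
}
best = pick_server(servers, weights)
```

Changing the weights (e.g. a user who values proximity over bandwidth) changes which server wins, which is the adaptation behaviour the paper evaluates across runs.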
Abstract:
This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structure- or foundation-related problems, but seismic microzonation has generally been recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake hazards and represent them as spatial distributions. This paper presents a new methodology for seismic microzonation based on the location of the study area and the possible associated hazards. The new method consists of seven important steps, each with a defined output, and these steps are linked with one another; addressing a single step and its result, as is widely practiced, does not amount to seismic microzonation. The paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated considering the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated considering a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at by following a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and ranks according to a consensus opinion about their relative significance to the seismic hazard.
The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are more widely used for microzonation than geotechnical parameters, but the studies show that the hazard index values are based on site-specific geotechnical parameters.
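The AHP weighting step can be sketched as extracting the normalised principal eigenvector of a pairwise comparison matrix and using it in a weighted overlay of the hazard themes. The comparison values and theme names below are illustrative, not the study's consensus judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Hypothetical comparisons among three hazard themes, e.g. peak ground
# acceleration vs. site amplification vs. liquefaction susceptibility.
# pairwise[i][j] = how much more important theme i is than theme j.
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0]]
w = ahp_weights(pairwise)        # weights sum to 1, ordered by importance

# Hazard index at one GIS grid cell = weighted overlay of the cell's
# normalised theme ranks (each in [0, 1]).
ranks = np.array([0.8, 0.5, 0.2])
hazard_index = float(w @ ranks)
```

In the full method this overlay is evaluated cell by cell across the GIS layers, and a consistency ratio of the pairwise matrix would normally be checked before the weights are accepted.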
Abstract:
Background & objectives: Pre-clinical toxicology evaluation of biotechnology products is a challenge to the toxicologist. The present investigation is an attempt to evaluate the safety profile of the first indigenously developed recombinant DNA anti-rabies vaccine [DRV (100 μg)] and combination rabies vaccine [CRV (100 μg DRV and 1.25 IU of cell-culture-derived inactivated rabies virus vaccine)], which are intended for clinical use by the intramuscular route, in Rhesus monkeys. Methods: As per the regulatory requirements, the study was designed for acute (single dose - 14 days), sub-chronic (repeat dose - 28 days) and chronic (intended clinical dose - 120 days) toxicity tests using three dose levels, viz. therapeutic, average (2x therapeutic dose) and highest dose (10x therapeutic dose), in monkeys. The monkey was selected as the model based on affinity and the rapid, higher antibody response observed during the efficacy studies. An attempt was made to evaluate all parameters, including physical, physiological, clinical, haematological and histopathological profiles of all target organs, as well as Tier I, II and III immunotoxicity parameters. Results: In acute toxicity there was no mortality in spite of exposing the monkeys to 10x DRV. In the sub-chronic and chronic toxicity studies there were no abnormalities in physical, physiological, neurological or clinical parameters after administration of the test compound in the intended and 10-times clinical dosage schedules of DRV and CRV under the experimental conditions. Clinical chemistry, haematology, organ weights and histopathology studies were essentially unremarkable, except for the presence of residual DNA at femtogram levels at the site of injection in the animal which received 10x DRV in the chronic toxicity study. The No Observed Adverse Effect Level (NOAEL) of DRV is 1000 μg/dose (10 times the therapeutic dose) if administered on days 0, 4, 7, 14 and 28.
Interpretation & conclusions: The information generated by this study not only draws attention to the need for national and international regulatory agencies to formulate guidelines for the pre-clinical safety evaluation of biotech products but also facilitates the development of biopharmaceuticals as safe potential therapeutic agents.