934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
The author proves that the determinantal equation

| Σy       n       ΣZ^x     |
| ΣxyZ^x   ΣxZ^x   ΣxZ^{2x} | = 0,
| ΣyZ^x    ΣZ^x    ΣZ^{2x}  |

where Z = 10^{-cq} and q is a numerical constant, used by Pimentel Gomes and Malavolta in several articles for the interpolation of Mitscherlich's equation y = A[1 - 10^{-c(x + b)}] by the least squares method, always has a root of order three at Z = 1. Therefore the equation A Z^m + A_1 Z^{m-1} + ... + A_m = 0 obtained from that determinant can be divided by (Z - 1)^3. This property provides a good check on the correctness of the computations and facilitates the solution of the equation.
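A minimal numerical sketch of the check described above, under illustrative assumptions: if the coefficients A, A_1, ..., A_m have been computed correctly, dividing the polynomial by (Z - 1)^3 must leave a zero remainder. The coefficients below are hypothetical, constructed so that the triple root at Z = 1 is present, and the use of numpy is our own choice.

```python
import numpy as np

# Hypothetical coefficients (highest degree first) of the polynomial obtained
# from the determinant; built here with a triple root at Z = 1 so the check
# is illustrated on a known-good case.
coeffs = np.poly([1.0, 1.0, 1.0, 0.4, 0.7])

# Divide by (Z - 1)^3; a non-negligible remainder would signal an error
# in the computation of the coefficients.
quotient, remainder = np.polydiv(coeffs, np.poly([1.0, 1.0, 1.0]))
print("quotient :", quotient)
print("remainder:", np.round(remainder, 12))  # expected: all zeros
```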
Abstract:
This paper deals with the estimation of milk production by means of weekly, biweekly, monthly and bimonthly observations, and also by the method known as 6-5-8, in which one observation is taken at the 6th week of lactation, another at the 5th month and a third one at the 8th month. The data studied were obtained from 72 lactations of the Holstein-Friesian breed at the "Escola Superior de Agricultura Luiz de Queiroz" (Piracicaba, S. Paulo, Brazil), with 6 calvings in each month of the year and also 12 first calvings, 12 second calvings, and so on, up to the sixth. The authors criticize the use of the "maximum error" found in papers dealing with this subject, and also the use of the mean deviation. The former is completely superseded and inadvisable, and the latter, although equivalent to a certain extent to the usual standard deviation, has only 87,6% of its efficiency, according to KENDALL (9, pp. 130-131; 10, pp. 6-7). The data obtained were compared with the actual production, obtained by daily control, and the deviations observed were studied. Their means and standard deviations are given in Table IV. In spite of BOX's recent results (11), showing that with equal numbers in all classes a certain inequality of variances is not important, the authors separated the methods before carrying out the analysis of variance, thus avoiding putting together methods with too different standard deviations. We compared the first three methods to begin with (Table VI). Then we carried out the analysis with the first four methods (Table VII). Finally we compared the last two methods (Table VIII). These analyses of variance compare the arithmetic means of the deviations given by the methods studied, which is equivalent to comparing their biases. So we conclude that season of calving and order of calving do not affect the biases, and the methods themselves do not differ in this respect, with the exception of method 6-5-8. Another method of attack, maybe preferable, would be to compare the estimates of the biases with their expected value under the null hypothesis (zero) by the t-test. We have:

1) Weekly control: t = (x̄ - 0)/s(x̄) = (8,59 - 0)/s(x̄) = 1,56
2) Biweekly control: t = (11,20 - 0)/6,21 = 1,80
3) Monthly control: t = (7,17 - 0)/9,48 = 0,76
4) Bimonthly control: t = (-4,66 - 0)/17,56 = -0,26
5) Method 6-5-8: t = (144,89 - 0)/22,41 = 6,46***

Three asterisks denote significance at the 0,1% level of probability. In this way we should conclude that the weekly, biweekly, monthly and bimonthly methods of control may be assumed to be unbiased. The 6-5-8 method is proved to be positively biased, and here the bias equals 5,9% of the mean milk production. The precision of the methods studied may be judged by their standard deviations, or by intervals covering, with a certain probability (95%, for example), the deviation x corresponding to an estimate obtained by one of the methods studied. Since the difference x - x̄, where x̄ is the mean of the 72 deviations obtained for each method, has a t distribution with mean zero and estimated standard deviation s(x - x̄) = √(1 + 1/72) · s = 1.007 · s, and the limit of t at the 5% probability level with 71 degrees of freedom is 1.99, the interval to be considered is given by x̄ ± 1.99 × 1.007 · s = x̄ ± 2.00 · s. The intervals thus calculated are given in Table IX.
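A short sketch of the two calculations quoted above: the one-sample t statistic for a method's mean deviation, and the x̄ ± 2.00·s interval for a single new deviation. The mean deviation and its standard error are the figures reported for the biweekly control; the per-observation standard deviation s and the use of scipy are our own illustrative assumptions.

```python
import math
from scipy import stats

n = 72                      # number of lactations per method
mean_dev = 11.20            # mean deviation (biweekly control, from the abstract)
se_mean = 6.21              # reported standard error of that mean

# One-sample t statistic for H0: bias = 0
t = (mean_dev - 0.0) / se_mean
p = 2 * stats.t.sf(abs(t), df=n - 1)
print(f"t = {t:.2f}, two-sided p = {p:.3f}")

# 95% interval for a single new deviation x:
# s(x - x_bar) = sqrt(1 + 1/n) * s and t(5%, 71 d.f.) = 1.99,
# giving x_bar +/- 1.99 * 1.007 * s, i.e. roughly x_bar +/- 2.00 * s.
s = 100.0                   # hypothetical per-observation standard deviation
t_crit = stats.t.ppf(0.975, df=n - 1)          # ~1.99
half_width = t_crit * math.sqrt(1 + 1 / n) * s # ~2.00 * s
print(f"interval: {mean_dev:.2f} +/- {half_width:.2f}")
```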
Abstract:
Production of desirable outputs is often accompanied by undesirable by-products that have damaging effects on the environment and whose disposal is frequently regulated by public authorities. In this paper, we compute directional technology distance functions under particular assumptions concerning the disposability of bads in order to test for the existence of what we call ‘complex situations’, where the biggest producer is not the greatest polluter. Furthermore, we show how, in such situations, environmental regulation could achieve an effective reduction in the aggregate level of bad outputs without reducing the production of good outputs. Finally, we illustrate our methodology with an empirical application to a sample of Spanish ceramic tile producers.
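For reference, a standard form of the directional technology distance function used in this literature (following Chambers, Chung and Färe) is sketched below; the notation is ours, not necessarily the paper's. For an input vector x, good outputs y, bad outputs b, and a direction g = (g_x, g_y, g_b),

$$\vec{D}_T(x, y, b; g) \;=\; \sup\{\beta \ge 0 \;:\; (x - \beta g_x,\; y + \beta g_y,\; b - \beta g_b) \in T\},$$

where T is the set of feasible (input, good output, bad output) combinations. A value of zero places the unit on the frontier in the direction g, while larger values indicate greater inefficiency. Changing the disposability assumptions on the bads (e.g. weak versus free disposability) changes T, and hence the resulting scores, which is what allows the ‘complex situations’ above to be detected.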
Abstract:
We show that incentive efficient allocations in economies with adverse selection and moral hazard can be determined as optimal solutions to a linear programming problem and we use duality theory to obtain a complete characterization of the optima. Our dual analysis identifies welfare effects associated with the incentives of the agents to truthfully reveal their private information. Because these welfare effects may generate non-convexities, incentive efficient allocations may involve randomization. Other properties of incentive efficient allocations are also derived.
Abstract:
It is usually assumed that the appraisal of the impacts experienced by present generations does not entail any difficulty. However, this is not true; moreover, there is no widely accepted methodology for taking these impacts into account. Some of the controversial issues are: the appropriate value for the discount rate; the choice of the units for expressing the impacts, whether physical or monetary (income, consumption or investment); and the valuation of tangible and intangible goods. When approaching the problem of very long term impacts, there is also the problem of valuing the impacts experienced by future generations, through, e.g., the use of an intergenerational discount rate. However, if this were the case, the present generation's perspective would prevail, as if all the property rights over the resources were owned by it. Therefore, the sustainability requirement should also be incorporated into the analysis. We analyze these problems in this article and show some possible solutions.
Abstract:
Conflict among member states regarding the distribution of net financial burdens has been allowed to contaminate the entire design of the EU budget, with very negative consequences in terms of equity, efficiency and transparency. To get around this problem and pave the way for a substantive budget reform, we propose to decouple distributional negotiations from the rest of the budget process by linking member state net balances in a rigid manner to relative prosperity. This would be achieved through the introduction of a system of compensating horizontal transfers that would take to its logical conclusion the Commission's proposal for a generalized compensation mechanism. We discuss the impact of the proposed scheme on member states' incentives and illustrate its financial implications using revenue and expenditure projections for 2013 that are based on the current Financial Perspectives and Own Resources Decision.
Abstract:
Double immunodiffusion (DID) was used as a screening test for the diagnosis of aspergillosis. Three hundred and fifty patients were tested, all of them referred from a specialized chest disease hospital and without a definitive etiological diagnosis. When DID was positive, additional information such as clinical history and radiographic findings was requested, and surgical specimens were obtained whenever possible. Specific precipitin bands against the Aspergillus fumigatus antigen were found in the sera of 29 (8.3%) of the 350 patients. Nineteen (65.5%) of the 29 patients with positive serology were recognized as having a fungus ball, by X-ray signs in 17, by pathological examination in 2, or by both in 8 patients. This two-year prospective study has shown that pulmonary aspergillosis is a considerable problem among patients admitted to a Chest Diseases Hospital, especially in those with pulmonary cavities or bronchiectasis.
Abstract:
Measuring productive efficiency provides information on the likely effects of regulatory reform. We present a Data Envelopment Analysis (DEA) of a sample of 38 vehicle inspection units under a concession regime, between the years 2000 and 2004. The differences in efficiency scores show the potential technical efficiency benefit of introducing some form of incentive regulation or of progressing towards liberalization. We also compute scale efficiency scores, showing that only units in territories with very low population density operate at a sub-optimal scale. Among those that operate at an optimal scale, there are significant differences in size; the largest ones operate in territories with the highest population density. This suggests that the introduction of new units in the most densely populated territories (a likely effect of some form of liberalization) would not be detrimental in terms of scale efficiency. We also find that inspection units belonging to a large, diversified firm show higher technical efficiency, reflecting economies of scale or scope at the firm level. Finally, we show that between 2002 and 2004, a period of high regulatory uncertainty in the sample’s region, technical change was almost zero. Regulatory reform should take due account of scale and diversification effects, while at the same time avoiding regulatory uncertainty.
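Although the abstract reports only the results, DEA scores of this kind are obtained by solving one small linear program per unit. A minimal sketch of an input-oriented, constant-returns (CCR) formulation is given below; the toy data, variable names and scipy-based setup are our own illustration, not the paper's specification, which may use a different orientation or returns-to-scale assumption.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: rows = decision-making units (e.g. inspection units),
# X = inputs, Y = outputs. These numbers are illustrative only.
X = np.array([[2.0], [4.0], [3.0], [5.0]])      # one input per unit
Y = np.array([[10.0], [16.0], [15.0], [14.0]])  # one output per unit
n = X.shape[0]

def ccr_input_efficiency(o):
    """Input-oriented CRS (CCR) efficiency of unit o:
    min theta  s.t.  X'lambda <= theta * x_o,  Y'lambda >= y_o,  lambda >= 0."""
    # Decision vector: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                 # input constraints
        A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
    for r in range(Y.shape[1]):                 # output constraints
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

for o in range(n):
    print(f"unit {o}: efficiency = {ccr_input_efficiency(o):.3f}")
```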
Abstract:
In recent years there has been extensive debate in the energy economics and policy literature on the likely impacts of improvements in energy efficiency. This debate has focussed on the notion of rebound effects. Rebound effects occur when improvements in energy efficiency actually stimulate the direct and indirect demand for energy in production and/or consumption. This phenomenon occurs through the impact of the increased efficiency on the effective, or implicit, price of energy. If demand is stimulated in this way, the anticipated reduction in energy use, and the consequent environmental benefits, will be partially or possibly even more than wholly (in the case of ‘backfire’ effects) offset. A recent report published by the UK House of Lords identifies rebound effects as a plausible explanation as to why recent improvements in energy efficiency in the UK have not translated to reductions in energy demand at the macroeconomic level, but calls for empirical investigation of the factors that govern the extent of such effects. Undoubtedly the single most important conclusion of recent analysis in the UK, led by the UK Energy Research Centre (UKERC) is that the extent of rebound and backfire effects is always and everywhere an empirical issue. It is simply not possible to determine the degree of rebound and backfire from theoretical considerations alone, notwithstanding the claims of some contributors to the debate. In particular, theoretical analysis cannot rule out backfire. Nor, strictly, can theoretical considerations alone rule out the other limiting case, of zero rebound, that a narrow engineering approach would imply. In this paper we use a computable general equilibrium (CGE) framework to investigate the conditions under which rebound effects may occur in the Scottish regional and UK national economies. Previous work has suggested that rebound effects will occur even where key elasticities of substitution in production are set close to zero. Here, we carry out a systematic sensitivity analysis, where we gradually introduce relative price sensitivity into the system, focusing in particular on elasticities of substitution in production and trade parameters, in order to determine conditions under which rebound effects become a likely outcome. We find that, while there is positive pressure for rebound effects even where (direct and indirect) demand for energy is very price inelastic, this may be partially or wholly offset by negative income and disinvestment effects, which also occur in response to falling energy prices.
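As a point of reference for the discussion above, the rebound effect is conventionally quantified by comparing the observed change in energy use with the engineering saving implied by the efficiency gain; the notation below is ours, not the paper's:

$$R \;=\; 1 \;-\; \frac{\Delta E_{\text{actual}}}{\Delta E_{\text{expected}}},$$

where ΔE_expected is the reduction in energy use predicted by the efficiency improvement alone (prices and behaviour held fixed) and ΔE_actual is the reduction observed once price and income responses have worked through the economy. R = 0 corresponds to the pure engineering outcome, 0 < R < 1 to partial rebound, and R > 1 to backfire, where energy use rises on balance.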
Abstract:
This paper uses a computable general equilibrium (CGE) framework to investigate the conditions under which rebound effects may occur in response to increases in energy efficiency in the UK national economy. Previous work for the UK has suggested that rebound effects will occur even where key elasticities of substitution in production are set close to zero. The research reported in this paper involves carrying out a systematic sensitivity analysis, where relative price sensitivity is gradually introduced into the system, focusing specifically on elasticities of substitution in production and trade parameters, in order to determine conditions under which rebound effects become a likely outcome. The main result is that, while there is positive pressure for rebound effects even where (direct and indirect) demands for energy are very price inelastic, this may be partially or wholly offset by negative income, competitiveness and disinvestment effects, which also occur in response to falling energy prices. The occurrence of disinvestment effects is of particular interest. These occur where falling energy prices reduce profitability in domestic energy supply sectors, leading to a contraction in capital stock in these sectors, which may in turn lead to rebound effects that are smaller in the long run than in the short run, a result that runs contrary to the predictions of previous theoretical work in this area.
Abstract:
We propose a nonlinear heterogeneous panel unit root test of the null hypothesis that all series are unit root processes against the alternative that a proportion of the units are generated by globally stationary ESTAR processes while the remaining non-zero proportion are generated by unit root processes. The proposed test is simple to implement and accommodates cross-sectional dependence. We show that the distribution of the test statistic is free of nuisance parameters as (N, T) → ∞. Monte Carlo simulation shows that our test has correct size and, under the hypothesis that the data are generated by globally stationary ESTAR processes, has better power than the test recently proposed in Pesaran [2007]. Various applications are provided.
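For illustration, a globally stationary ESTAR process of the kind referred to above can be simulated as below. The exponential transition form and the parameter values are a common choice in this literature, not necessarily the paper's own specification, and the snippet only shows the behaviour being tested for, not the test statistic itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_estar(T=500, gamma=-0.5, theta=0.1, sigma=1.0):
    """Simulate Delta y_t = gamma * y_{t-1} * (1 - exp(-theta * y_{t-1}^2)) + eps_t.
    With gamma < 0 and theta > 0 the process behaves like a random walk near
    zero but is mean-reverting for large |y|, i.e. it is globally stationary."""
    y = np.zeros(T)
    eps = rng.normal(0.0, sigma, size=T)
    for t in range(1, T):
        transition = 1.0 - np.exp(-theta * y[t - 1] ** 2)
        y[t] = y[t - 1] + gamma * y[t - 1] * transition + eps[t]
    return y

y_estar = simulate_estar()                   # globally stationary ESTAR series
y_rw = np.cumsum(rng.normal(size=500))       # pure unit root, for comparison
print("ESTAR sample std      :", round(float(y_estar.std()), 2))
print("Random-walk sample std:", round(float(y_rw.std()), 2))
```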
Abstract:
We present a wage-hours contract designed to minimize costly turnover given investments in specific training combined with firm and worker information asymmetries. It may be optimal for the parties to work ‘long hours’ remunerated at premium rates for guaranteed overtime hours. Based on British plant and machine operatives, we test three predictions. First, trained workers with longer tenure are more likely to work overtime. Second, hourly overtime pay exceeds the value of marginal product while the basic hourly wage is less than the value of marginal product. Third, the basic hourly wage is negatively related to the overtime premium.
Abstract:
We consider, both theoretically and empirically, how different organization modes are aligned to govern the efficient solving of technological problems. The data set is a sample from the Chinese consumer electronics industry. Following mainly the problem-solving perspective (PSP) within the knowledge-based view (KBV), we develop and test several PSP and KBV hypotheses, in conjunction with competing transaction cost economics (TCE) alternatives, in an examination of the determinants of the R&D organization mode. The results show that a firm’s existing knowledge base is the single most important explanatory variable. Problem complexity and decomposability are also found to be important, consistent with the theoretical predictions of the PSP, but it is suggested that these two dimensions need to be treated as separate variables. The TCE hypotheses also receive some support, but the estimation results seem more supportive of the PSP and the KBV than of the TCE.
Abstract:
Background: Retrospective analyses suggest that personalized PK-based dosage might be useful for imatinib, as treatment response correlates with trough concentrations (Cmin) in cancer patients. Our objectives were to improve the interpretation of randomly timed concentration measurements and to confirm the method's efficiency before evaluating the clinical usefulness of systematic PK-based dosage in chronic myeloid leukemia patients. Methods and Results: A Bayesian method was validated for the prediction of individual Cmin on the basis of a single random observation and was applied in a prospective multicenter randomized controlled clinical trial. 28 out of 56 patients were enrolled in the systematic dosage individualization arm and had 44 follow-up visits (their clinical follow-up is ongoing). Dose adjustments were proposed in 39% of cases, where the predicted Cmin was significantly away from the target (1000 ng/ml). Recommendations were taken up by physicians in 57% of cases, and patients were considered non-compliant in 27%. Median Cmin at study inclusion was 754 ng/ml and differed significantly from the target (p=0.02, Wilcoxon test). On follow-up, median Cmin was 984 ng/ml (p=0.82) in the compliant group. The CV decreased from 46% to 27% (p=0.02, F-test). Conclusion: PK-based (Bayesian) dosage adjustment is able to bring individual drug exposure closer to a given therapeutic target. Its influence on therapeutic response remains to be evaluated.
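As a generic illustration of the kind of Bayesian reasoning involved (not the trial's actual population PK model, which maps a randomly timed sample to Cmin through a full pharmacokinetic model), the sketch below shrinks a single noisy concentration toward a population mean in a normal-normal model on the log scale; all numbers and the simplification of treating the observation as a direct Cmin measurement are ours.

```python
import numpy as np

# Generic normal-normal (conjugate) shrinkage on the log scale:
# prior:      log(Cmin_i) ~ N(mu_pop, omega^2)      (between-patient variability)
# likelihood: log(c_obs)  ~ N(log(Cmin_i), sigma^2) (residual/assay variability)
# All numbers below are illustrative assumptions, not trial estimates.
mu_pop = np.log(1000.0)   # population typical Cmin, ng/ml
omega = 0.40              # between-patient SD on the log scale
sigma = 0.25              # residual SD on the log scale

def predict_cmin(c_obs):
    """Posterior (precision-weighted) prediction of an individual's Cmin
    from a single observed concentration."""
    w_prior = 1.0 / omega**2
    w_obs = 1.0 / sigma**2
    post_mean = (w_prior * mu_pop + w_obs * np.log(c_obs)) / (w_prior + w_obs)
    return float(np.exp(post_mean))

for c in (400.0, 754.0, 1500.0):
    print(f"observed {c:6.0f} ng/ml -> predicted Cmin ~ {predict_cmin(c):6.0f} ng/ml")
```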
Abstract:
Weak solutions of the spatially inhomogeneous (diffusive) Aizenman-Bak model of coagulation-breakup within a bounded domain with homogeneous Neumann boundary conditions are shown to converge, in the fast reaction limit, towards local equilibria determined by their mass. Moreover, this mass is the solution of a nonlinear diffusion equation whose nonlinearity depends on the (size-dependent) diffusion coefficient. Initial data are assumed to have integrable zero order moment and square integrable first order moment in size, and finite entropy. In contrast to our previous result [CDF2], we are able to show the convergence without assuming uniform bounds from above and below on the number density of clusters.
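For readers unfamiliar with the model, a commonly used form of the diffusive Aizenman-Bak coagulation-breakup system (constant coagulation and fragmentation kernels) is sketched below; the normalization is one standard choice and may differ from the one used in the paper. For f = f(t, x, y), the density of clusters of size y at position x,

$$\partial_t f - a(y)\,\Delta_x f \;=\; \int_0^y f(y-y')\,f(y')\,dy' \;-\; 2 f(y)\int_0^\infty f(y')\,dy' \;+\; 2\int_y^\infty f(y')\,dy' \;-\; y\, f(y),$$

with homogeneous Neumann conditions in x. The local equilibria of the reaction terms are exponential profiles $f_\rho(y) = e^{-y/\sqrt{\rho}}$, uniquely determined by their mass $\rho = \int_0^\infty y\, f_\rho(y)\, dy$, which is the sense in which the limiting profiles in the abstract are "determined by their mass"; in the fast reaction limit the mass then solves a nonlinear diffusion equation of the form $\partial_t \rho = \Delta_x \big(\int_0^\infty y\, a(y)\, e^{-y/\sqrt{\rho}}\, dy\big)$, whose nonlinearity reflects the size-dependent diffusion coefficient a(y).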