20 results for costly taxation
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The quantification of the available energy in the environment is important because it determines photosynthesis, evapotranspiration and, therefore, the final yield of crops. Instruments for measuring the energy balance are costly, and indirect estimation alternatives are desirable. This study assessed the performance of Deardorff's model during a cycle of a sugarcane crop in Piracicaba, State of São Paulo, Brazil, in comparison to the aerodynamic method. This mechanistic model simulates the energy fluxes (sensible heat, latent heat and net radiation) at three levels (atmosphere, canopy and soil) using only air temperature, relative humidity and wind speed measured at a reference level above the canopy, the crop leaf area index, and some pre-calibrated parameters (canopy albedo, soil emissivity, atmospheric transmissivity and hydrological characteristics of the soil). The analysis was made for different time scales, insolation conditions and seasons (spring, summer and autumn). Analyzing all data at 15-minute intervals, the model performed well for net radiation simulation under different insolation conditions and seasons. The latent and sensible heat fluxes in the atmosphere did not differ from the aerodynamic method estimates during the autumn. The sensible heat flux in the soil was poorly simulated by the model, owing to the poor performance of the soil water balance method. Deardorff's model generally improved the flux simulations relative to the aerodynamic method when more insolation was available in the environment.
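As a back-of-the-envelope illustration of the bookkeeping behind this kind of study (not Deardorff's actual three-level parameterization), the surface energy balance partitions net radiation Rn into sensible heat H, latent heat LE and soil heat flux G, so any one flux can be recovered as a residual of the others; the sketch below uses hypothetical flux values.

```python
# Minimal sketch of surface energy balance closure: Rn = H + LE + G.
# Hypothetical 15-minute flux values in W/m^2; illustrative bookkeeping
# only, not Deardorff's three-level (atmosphere/canopy/soil) model.

def latent_heat_residual(rn: float, h: float, g: float) -> float:
    """Recover latent heat flux LE as the residual of the balance."""
    return rn - h - g

def bowen_ratio(h: float, le: float) -> float:
    """Ratio of sensible to latent heat flux."""
    return h / le

rn, h, g = 450.0, 120.0, 40.0  # hypothetical midday values, W/m^2
le = latent_heat_residual(rn, h, g)
print(f"LE = {le:.0f} W/m^2, Bowen ratio = {bowen_ratio(h, le):.2f}")
```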
Abstract:
Background: The cultivar Micro-Tom (MT) is regarded as a model system for tomato genetics due to its short life cycle and miniature size. However, efforts to improve tomato genetic transformation have led to protocols dependent on the costly hormone zeatin, combined with an excessive number of steps. Results: Here we report the development of an MT near-isogenic genotype harboring the allele Rg1 (MT-Rg1), which greatly improves tomato in vitro regeneration. Regeneration was further improved in MT by including a two-day incubation of cotyledonary explants on medium containing 0.4 µM 1-naphthaleneacetic acid (NAA) before cytokinin treatment. Both strategies allowed the use of 5 µM 6-benzylaminopurine (BAP), a cytokinin 100 times less expensive than zeatin. The use of MT-Rg1 and NAA pre-incubation, followed by BAP regeneration, resulted in high transformation frequencies (near 40%) in a shorter protocol with fewer steps, spanning approximately 40 days from Agrobacterium infection to transgenic plant acclimatization. Conclusions: The genetic resource and the protocol presented here represent invaluable tools for routine gene expression manipulation and high-throughput functional genomics by insertional mutagenesis in tomato.
Abstract:
The structural engineering community in Brazil faces new challenges with the recent occurrence of high-intensity tornados. Satellite surveillance data show that the area covering the south-east of Brazil, Uruguay and part of Argentina is one of the world's most tornado-prone areas, second only to the infamous tornado alley in the central United States. The design of structures subject to tornado winds is a typical example of decision making in the presence of uncertainty. Structural design involves finding a good balance between the competing goals of safety and economy. This paper presents a methodology to find the optimum balance between these goals in the presence of uncertainty. In this paper, reliability-based risk optimization is used to find the optimal safety coefficient that minimizes the total expected cost of a steel frame communications tower subject to extreme storm and tornado wind loads. The technique is not new, but it is applied to a practical problem of increasing interest to Brazilian structural engineers. The problem is formulated in the partial safety factor format used in current design codes, with an additional partial factor introduced to serve as the optimization variable. The expected cost of failure (or risk) is defined as the product of a limit state exceedance probability and the corresponding limit state exceedance cost. These costs include the costs of repairing, rebuilding, and paying compensation for injury and loss of life. The total expected failure cost is the sum of the individual expected costs over all failure modes. The steel frame communications tower studied here has become very common in Brazil due to increasing mobile phone coverage. The study shows that optimum reliability is strongly dependent on the cost (or consequences) of failure. Since failure consequences depend on the actual tower location, it turns out that different optimum designs should be used in different locations. Failure consequences are also different for the different parties involved in the design, construction and operation of the tower. Hence, it is important that risk be well understood by the parties involved, so that proper contracts can be made. The investigation shows that when non-structural terms dominate design costs (e.g., in residential or office buildings), it is not too costly to over-design; this observation agrees with observed practice for non-optimized structural systems. In this situation, it is much easier to lose money by under-design. When structural material cost is a significant part of design cost (e.g., a concrete dam or bridge), one is likely to lose significant money by over-design. In this situation, a cost-risk-benefit optimization analysis is highly recommended. Finally, the study also shows that under time-varying loads like tornados, the optimum reliability is strongly dependent on the selected design life.
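The trade-off the paper optimizes can be sketched in a few lines: total expected cost is the initial cost (increasing in the safety factor) plus, summed over failure modes, the failure probability times the failure cost (decreasing in the safety factor). The cost and probability functions below are toy stand-ins for the paper's structural reliability analysis, not its actual model.

```python
# Toy reliability-based risk optimization over a partial safety factor.
# Cost and failure-probability curves are hypothetical stand-ins for
# the paper's structural reliability model.
import math
from scipy.optimize import minimize_scalar

FAILURE_COST = 50_000.0  # repair + rebuild + compensation (hypothetical)

def construction_cost(sf: float) -> float:
    return 100.0 * sf  # material cost grows with the safety factor

def failure_prob(sf: float) -> float:
    return 0.05 * math.exp(-4.0 * (sf - 1.0))  # drops as design stiffens

def total_expected_cost(sf: float) -> float:
    return construction_cost(sf) + failure_prob(sf) * FAILURE_COST

res = minimize_scalar(total_expected_cost, bounds=(1.0, 3.0), method="bounded")
print(f"optimum safety factor ~ {res.x:.2f}, expected total cost ~ {res.fun:.0f}")
```

Raising FAILURE_COST pushes the optimum towards a stiffer design, which mirrors the paper's finding that towers at different locations, with different failure consequences, warrant different optimum designs.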
Abstract:
Background: In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been shown to be effective. Method: The cost-effectiveness of the OptiMAL® test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon ran from the onset of fever until the diagnostic result was provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed on key model parameters. Results: In the base case scenario, considering 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMAL® in these remote areas if its high accuracy is maintained in the field. Decisions regarding the use of rapid tests for the diagnosis of malaria in these areas depend on the current accuracy of microscopy in the field.
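The comparison rests on a standard incremental cost-effectiveness ratio (ICER): the extra cost of the more effective strategy divided by its extra effect. A minimal sketch, with hypothetical inputs standing in for the study's 2006 Brazilian cost data:

```python
# Incremental cost-effectiveness ratio (ICER) sketch. All inputs are
# hypothetical placeholders, not the study's actual 2006 data.

def icer(cost_a: float, effect_a: float, cost_b: float, effect_b: float) -> float:
    """Extra cost per extra adequately diagnosed case of strategy A over B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# strategy A: thick smear microscopy, strategy B: RDT (hypothetical numbers)
cost_mic, cases_mic = 12_000.0, 940.0
cost_rdt, cases_rdt = 9_500.0, 910.0

print(f"ICER = US$ {icer(cost_mic, cases_mic, cost_rdt, cases_rdt):.1f} "
      "per adequately diagnosed case")
```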
Abstract:
Background: Micrurus corallinus (coral snake) is a tropical forest snake belonging to the family Elapidae. Its venom shows high neurotoxicity associated with pre- and post-synaptic toxins, causing diaphragm paralysis, which may result in death. In spite of a relatively small incidence of accidents, serum therapy is crucial for those bitten. However, adequate production of antiserum is hampered by the difficulty of obtaining sufficient amounts of venom from a small snake with demanding breeding conditions. In order to elucidate the molecular basis of this venom and to uncover possible immunogens for an antiserum, we generated expressed sequence tags (ESTs) from its venom glands and analyzed the transcriptomic profile. In addition, their immunogenicity was tested using DNA immunization. Results: A total of 1438 ESTs were generated and grouped into 611 clusters. Toxin transcripts represented 46% of the total ESTs. The two main toxin classes consisted of three-finger toxins (3FTx) (24%) and phospholipases A2 (PLA2s) (15%). However, 8 other classes of toxins were present, including C-type lectins, natriuretic peptide precursors and even high-molecular-mass components such as metalloproteases and L-amino acid oxidases. Each class included an assortment of isoforms, some showing evidence of alternative splicing and domain deletions. Five antigenic candidates were selected (four 3FTx and one PLA2) and used for a preliminary study of DNA immunization. The immunological response showed that the sera from the immunized animals were able to recognize the recombinant antigens. Conclusion: Besides improving our knowledge of the composition of coral snake venoms, which are very poorly known compared to those of Old World elapids, the expression profile suggests abundant and diversified components that may be used in future antiserum formulation. As recombinant production of venom antigens frequently fails due to complex disulfide arrangements, DNA immunization may be a viable alternative. In fact, the selected candidates provided initial evidence of the feasibility of this approach, which is less costly and not dependent on the availability of venom.
Abstract:
We describe the effect of influenza-like illness (ILI) during the outbreak of pandemic (H1N1) 2009 on health care worker (HCW) absenteeism and compare the effectiveness and cost of 2 sick leave policies for HCWs with suspected influenza. We assessed initial 2-day sick leaves plus reassessment until the HCW was asymptomatic (2-day + reassessment policy), and initial 7-day sick leaves (7-day policy). Sick leaves peaked in August 2009: 3% of the workforce received leave for ILI. Costs during May–October reached R$798,051.87 (approximately US$443,362). The 7-day policy led to a higher monthly rate of sick leave days per 100 HCWs than did the 2-day + reassessment policy (8.72 vs. 3.47 days/100 HCWs; p<0.0001) and resulted in higher costs (US$1,128 vs. US$609 per HCW on leave). ILI affected HCW absenteeism. The 7-day policy was more costly and no more effective in preventing transmission to patients than the 2-day + reassessment policy.
Abstract:
Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin, and conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression used on spectral data is an alternative to the published pedotransfer functions tested in this study for soil bulk density prediction. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm⁻³) and the lowest prediction error (0.13 g cm⁻³), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Nevertheless, further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density prediction, especially in environments such as the Amazon forest.
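The abstract's modified partial least squares approach can be approximated with standard PLS regression; the sketch below uses synthetic spectra as a stand-in for the study's 1184 NIR samples, and sklearn's PLSRegression in place of the modified variant.

```python
# Sketch: predicting soil bulk density from NIR spectra with partial
# least squares regression. Synthetic data stand in for the study's
# 1184 samples; standard PLS stands in for the modified variant.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 300, 500
factors = rng.normal(size=(n_samples, 3))  # hidden soil properties
spectra = factors @ rng.normal(size=(3, n_wavelengths))
spectra += 0.05 * rng.normal(size=spectra.shape)  # instrument noise
bulk_density = 1.3 + 0.2 * factors[:, 0] + 0.02 * rng.normal(size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, bulk_density, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"hold-out prediction error ~ {rmse:.3f} g/cm^3")
```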
Abstract:
Natural selection has caused prey species to evolve distinct defensive mechanisms. One such mechanism is the evolution of noxious or distasteful chemicals, which have appeared independently in a number of vertebrates and invertebrates. In detailed analyses of arthropod behaviour, scent gland secretions have consistently been shown to be responsible for repelling specific predators. Because using such chemicals is costly, animals with cheaper alternative defences are expected not to release these secretions when other options exist. In this study, we sought to determine the defensive mechanisms of the harvestman Discocyrtus invalidus, a heavy-bodied species that bears a pair of repugnatorial glands. The spider Enoploctenus cyclothorax was used as the predator, and the cricket Gryllus sp. was used as a control. In a first set of experiments, the harvestmen were preyed upon significantly less than the crickets. In two other experiments, we found that the harvestmen did not use their scent gland secretions to deter the predator. Moreover, the results of a fourth experiment revealed that these spiders are not repelled by the defensive secretions. Discocyrtus invalidus has a thick cuticle over the entire body: scanning electron micrographs revealed that only the mouth, the articulations of the appendages and the tips of the legs are not covered by a hard integument. In a fifth experiment, we found that these spiders had difficulty piercing the harvestmen's bodies. This is the first experimental evidence that a chemically defended arachnid does not use its scent gland secretions to repel a much larger predator but instead relies on its heavily built body.
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The problem-solving algorithm proposed herein makes use of data from various power quality meters, which can either be synchronized by high-technology GPS devices or by using information from a fundamental frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance to a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches as well as to assess the robustness of the proposed method.
Abstract:
This paper presents a new methodology to estimate harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The main advantage of such a technique lies in its modeling facilities as well as its potential to solve fairly complex problems. The problem-solving algorithm proposed herein makes use of data from various power-quality (PQ) meters, which can either be synchronized by high-technology global positioning system devices or by using information from a fundamental frequency load flow. This second approach makes the overall PQ monitoring system much less costly. The algorithm is applied to an IEEE test network, for which a sensitivity analysis is performed to determine how the parameters of the ES can be selected so that the algorithm performs effectively. Case studies show fairly promising results and demonstrate the robustness of the proposed method.
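Both this paper and the preceding one rely on an evolution strategy; the self-contained (mu, lambda) sketch below shows the core loop on a generic inverse problem (recovering hidden parameters from a handful of "meter" readings), with the power-system model abstracted into a single hypothetical linear map.

```python
# Minimal (mu, lambda) evolution strategy on a generic inverse problem:
# recover hidden parameters from a few "meter" readings. The actual
# power-system model is abstracted into one hypothetical linear map.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))            # meter sensitivities (hypothetical)
true_params = np.array([0.8, -0.3, 0.5])
measured = A @ true_params             # readings at the monitored sites

def fitness(params: np.ndarray) -> float:
    return float(np.sum((A @ params - measured) ** 2))

mu, lam, sigma = 5, 30, 0.5
parents = rng.normal(size=(mu, 3))
for _ in range(200):
    idx = rng.integers(mu, size=lam)   # each offspring picks a parent
    offspring = parents[idx] + sigma * rng.normal(size=(lam, 3))
    scores = np.array([fitness(o) for o in offspring])
    parents = offspring[np.argsort(scores)[:mu]]  # comma selection
    sigma *= 0.97                      # simple step-size annealing

print("estimate:", parents[0].round(3), "true:", true_params)
```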
Abstract:
This paper presents both the theoretical and the experimental approaches of the development of a mathematical model to be used in multi-variable control system designs of an active suspension for a sport utility vehicle (SUV), in this case a light pickup truck. A complete seven-degree-of-freedom model is quickly and successfully identified, with very satisfactory results in simulations and in real experiments conducted with the pickup truck. The novelty of the proposed methodology is the use of commercial software in the early stages of the identification to speed up the process and to minimize the need for a large number of costly experiments. The paper also presents major contributions to the identification of uncertainties in vehicle suspension models and to the development of identification methods using sequential quadratic programming, where an innovation regarding the calculation of the objective function is proposed and implemented. Results from simulations of and practical experiments with the real SUV are presented, analysed, and compared, showing the potential of the method.
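A toy version of the identification step: fitting the natural frequency and damping ratio of a single vibration mode to noisy "measured" data with SciPy's sequential quadratic programming implementation (SLSQP). The actual work fits a full seven-degree-of-freedom model with a modified objective function; this sketch only shows the shape of the method.

```python
# Model identification via sequential quadratic programming (SciPy's
# SLSQP). A single damped mode stands in for the paper's full
# seven-degree-of-freedom suspension model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 400)

def response(params, t):
    wn, zeta = params  # natural frequency (rad/s) and damping ratio
    wd = wn * np.sqrt(1.0 - zeta**2)
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

measured = response([8.0, 0.15], t) + 0.02 * rng.normal(size=t.size)

def objective(params):
    return float(np.sum((response(params, t) - measured) ** 2))

res = minimize(objective, x0=[5.0, 0.3], method="SLSQP",
               bounds=[(1.0, 30.0), (0.01, 0.9)])
print("identified wn, zeta:", res.x.round(3))
```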
Abstract:
A study on the use of artificial intelligence (AI) techniques for the modelling and subsequent control of an electric resistance spot welding (ERSW) process is presented. The ERSW process is characterized by the coupling of thermal, electrical, mechanical, and metallurgical phenomena. For this reason, early attempts to model it using established computational methods such as finite differences, finite elements, and finite volumes required simplifications that either took the resulting model far from reality or made it too computationally expensive to be used in a real-time control system. Accordingly, the authors have developed an ERSW controller that uses fuzzy logic to adjust the energy transferred to the weld nugget. The proposed control strategies differ in the speed with which they reach convergence. Moreover, their application to quality control of spot welds through artificial neural networks (ANN) is discussed.
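A stripped-down version of the fuzzy adjustment described above: triangular membership functions on the weld-nugget temperature error and a weighted average of singleton rule outputs (a zero-order Sugeno step). The membership breakpoints and rule consequents are hypothetical, not the paper's calibrated controller.

```python
# Fuzzy adjustment sketch: map weld-nugget temperature error to a
# current correction. Breakpoints and rule outputs are hypothetical.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def current_correction(error: float) -> float:
    # rule antecedents: degrees of "too cold", "ok", "too hot"
    degrees = [tri(error, -200.0, -100.0, 0.0),
               tri(error, -50.0, 0.0, 50.0),
               tri(error, 0.0, 100.0, 200.0)]
    outputs = [150.0, 0.0, -150.0]  # singleton consequents, in amps
    total = sum(degrees)
    if total == 0.0:
        return 0.0  # no rule fires: leave the current unchanged
    return sum(d * o for d, o in zip(degrees, outputs)) / total

print(current_correction(-40.0))  # nugget slightly cold -> raise current
```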
Abstract:
Air transport has become a vital component of the global economy. However, greenhouse-gas emissions from this sector have a significant impact on the global climate, being responsible for over 3.5% of all anthropogenic radiative forcing. Also, the high visibility of aircraft emissions greatly affects the public image of the industry. In this context, incentive-based regulations, in the form of price or quantity controls, can be envisaged as alternatives to mitigate these emissions. The use of environmental charges in air transport, and the inclusion of the sector in the European Union Emissions Trading Scheme (EU ETS), are considered under a range of scenarios. The impacts of these measures on demand are estimated, and the results suggest that they are likely to be minimal, mainly due to the high willingness to pay for air transport. In particular, in the EU ETS scenario currently favoured by the EU, demand reductions are less than 2%. This may not hold in the longer run, for short trips, or if future caps become more stringent. Furthermore, given current estimates of the social cost of CO2 as well as typical EU ETS prices, supply-side abatement would be too costly to be encouraged by these policies in the short term. The magnitude of aviation CO2 emissions in the EU is estimated, both in physical and monetary terms; the results are consistent with Eurocontrol estimates and, for the EU-25, the total social cost of these emissions represents only 0.03% of the region's GDP. It is concluded that the use of multisector policies, such as the EU ETS, is unsuitable for curbing emissions from air transport, and that stringent emission charges or an isolated ETS would be better instruments. However, the inclusion of aviation in the EU ETS has advantages under target-oriented post-2012 scenarios, such as policy-cost dilution, certainty in reductions, and flexibility in abatement allocation. This solution is also attractive to airlines, as it would improve their public image while requiring virtually no reduction of their own emissions, since they would be fully capable of passing on policy costs to their customers.
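The sub-2% demand result follows from simple pass-through arithmetic: the allowance price times per-passenger emissions gives a fare increase, and a price elasticity converts that into a demand change. All numbers below are hypothetical, chosen only to show the shape of the calculation.

```python
# Back-of-the-envelope ETS pass-through to air fares. All inputs are
# hypothetical; the paper's scenario analysis drives its actual result.

allowance_price = 25.0  # EUR per tonne of CO2 (hypothetical)
emissions = 0.25        # tonnes of CO2 per passenger, medium-haul leg
fare = 400.0            # average ticket price, EUR
elasticity = -0.8       # own-price elasticity of demand (hypothetical)

fare_increase = allowance_price * emissions        # full cost pass-through
demand_change = elasticity * fare_increase / fare  # relative demand shift
print(f"fare +EUR {fare_increase:.2f} -> demand {demand_change:+.1%}")
```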
Abstract:
Paul Anthony Samuelson proposed and practiced a program for the Whig history of economics. One such example is his account of Frank Ramsey's contribution to optimal taxation in 1927. For him, and mainly for the public finance economists who later rediscovered Ramsey's contribution, Ramsey was a genius ahead of his time who used mathematics too advanced for his contemporaries and was thus rediscovered only in the 1970s, when economists became more mathematically literate. In this rediscovery, a memorandum that Samuelson wrote in 1951 for the US Treasury became central. I examine Samuelson's account, challenge it in some respects, and explore the historical context of the emergence of the optimal taxation literature in the 1970s. Additionally, I analyze the canonization of Ramsey in this field, stressing Samuelson's role in this process as a professor who liked telling stories about economists, especially about Ramsey, to his graduate students.
Abstract:
Using data from OECD economies, we show that inflation targeters suffered smaller output losses during disinflations when compared to non-targeters. We also study why some countries choose to adopt inflation targeting while others do not, and find that higher average inflation and smaller debt levels make the adoption of the regime more likely. Applying Heckman's procedure to control for selection bias does not alter the link between inflation targeting and less costly disinflations.
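The selection correction mentioned here is the classic Heckman two-step: a probit for the adoption decision, an inverse Mills ratio from its fitted index, and that ratio added as a regressor in the outcome equation. A self-contained sketch on synthetic data, using statsmodels for the probit and OLS stages:

```python
# Heckman two-step sketch on synthetic data: probit for the decision to
# adopt inflation targeting, then OLS for the disinflation output loss
# with the inverse Mills ratio as the selection-bias control. All data
# are simulated; covariate names only mimic the paper's story.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 500
avg_inflation = rng.normal(size=n)
debt = rng.normal(size=n)

# adoption is likelier with high inflation and low debt (as in the paper)
u = rng.normal(size=n)
adopt = (0.8 * avg_inflation - 0.6 * debt + u > 0).astype(float)

# output loss with an error component correlated with the adoption shock
loss = 1.0 - 0.5 * adopt + 0.3 * avg_inflation + 0.5 * u + rng.normal(size=n)

# step 1: probit for adoption, then the inverse Mills ratio
Z = sm.add_constant(np.column_stack([avg_inflation, debt]))
probit = sm.Probit(adopt, Z).fit(disp=0)
xb = probit.fittedvalues                 # linear index z'gamma
mills = norm.pdf(xb) / norm.cdf(xb)

# step 2: outcome regression for adopters, Mills ratio as extra regressor
sel = adopt == 1.0
X = sm.add_constant(np.column_stack([avg_inflation[sel], mills[sel]]))
ols = sm.OLS(loss[sel], X).fit()
print(ols.params.round(3))               # last coefficient: selection term
```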