974 results for Discount rate
Abstract:
Housing demand and supply are persistently out of balance. The Australian situation counters the experience of many other parts of the world in the aftermath of the Global Financial Crisis, with residential housing prices proving particularly resilient. Seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of population growth fuelled by immigration, and further buoyed by sustained historically low interest rates, rising income levels, and increased government assistance for first home buyers, this strong demand ensures that problems related to housing affordability continue almost unabated. A significant but less visible factor impacting housing affordability is holding costs. Although only one contributor in the housing affordability matrix, the nature and extent of their impact requires elucidation: for example, the computation and methodology behind the calculation of holding costs varies widely, and in some instances holding costs are ignored entirely. In addition, ambiguity exists over which elements comprise holding costs, affecting the assessment of their relative contribution. Such anomalies may be explained by the fact that assessment is conducted over time in an ever-changing environment. A strong relationship with opportunity cost, in turn dependent inter alia on prevailing inflation and/or interest rates, adds further complexity. By extending research in the general area of housing affordability, this thesis provides a detailed investigation of the elements of holding costs in the context of mid-sized (i.e. between 15 and 200 lots) greenfield residential property developments in South East Queensland. With the dimensions of holding costs and their influence over housing affordability determined, the null hypothesis H0, that holding costs are not passed on, can be addressed.
Arriving at these conclusions involves the development of robust economic and econometric models that seek to clarify the component impacts of holding cost elements. An explanatory sequential design research methodology has been adopted, whereby the compilation and analysis of quantitative data and the development of an economic model are informed by the subsequent collection and analysis of primarily qualitative data derived from surveying development-related organisations. Ultimately, there are significant policy implications for the framework used in Australian jurisdictions to promote, retain, or otherwise maximise the opportunities for affordable housing.
Abstract:
This paper examines the case of a procurement auction for a single project, in which the breakdown of the winning bid into its component items determines the value of payments subsequently made to the bidder as the work progresses. Unbalanced bidding, or bid skewing, involves the uneven distribution of mark-up among the component items in such a way as to derive increased benefit for the unbalancer without any change in the total bid. One form of unbalanced bidding, termed Front Loading (FL), is thought to be widespread in practice. This involves overpricing the work items that occur early in the project and underpricing the work items that occur later, in order to enhance the bidder's cash flow. Naturally, auctioneers attempt to protect themselves from the effects of unbalancing, typically by reserving the right to reject a bid that has been detected as unbalanced. As a result, models have been developed both to unbalance bids and to detect unbalanced bids, but virtually nothing is known of their use or success. This is of particular concern for the detection methods: without testing, there is no way of knowing the extent to which unbalanced bids remain undetected or balanced bids are falsely detected as unbalanced. This paper reports on a simulation study aimed at demonstrating the likely effects of unbalanced-bid detection models in a deterministic environment involving FL unbalancing in a Texas DOT detection setting, in which bids are deemed unbalanced if an item exceeds a maximum (or fails to reach a minimum) 'cut-off' value determined by the Texas method. A proportion of bids are automatically and maximally unbalanced over a long series of simulated contract projects, and the profits and detection rates of the balancers and unbalancers are compared.
The results show that, as expected, balanced bids are often incorrectly detected as unbalanced, with the rate of (mis)detection increasing with the proportion of FL bidders in the auction. It is also shown that, while the profit for balanced bidders remains the same irrespective of the number of FL bidders involved, the FL bidders' profit increases with the proportion of FL bidders present in the auction. Sensitivity tests show the results to be generally robust, with (mis)detection rates increasing further when there are fewer bidders in the auction and when more data are averaged to determine the baseline value, but being smaller or larger with increased cut-off values and increased cost and estimate variability, depending on the number of FL bidders involved. The FL bidders' expected benefit from unbalancing, on the other hand, increases when there are fewer bidders in the auction. It also increases when the cut-off rate and discount rate are increased, when there is less variability in the costs and their estimates, and when less data are used in setting the baseline values.
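The cut-off detection logic described above can be sketched as follows. This is an illustrative toy, not the actual Texas DOT method: the item costs, markup, shift fraction, cut-off multiplier, and the use of the balanced bid as the baseline are all assumptions.

```python
# Illustrative sketch of cut-off-based unbalanced-bid detection.
# All parameters below are assumed for illustration; they are not the
# Texas DOT values, and only the maximum-side cut-off is checked.

def front_load(costs, markup=0.10, shift=0.30):
    """Apply a uniform markup, then move price from late items to early
    ones, keeping the total bid unchanged (Front Loading)."""
    n = len(costs)
    base = [c * (1 + markup) for c in costs]
    half = n // 2
    moved = sum(base[i] * shift for i in range(half, n))
    per_early = moved / half
    return [p + per_early if i < half else p * (1 - shift)
            for i, p in enumerate(base)]

def detect(bid, baseline, cutoff=1.25):
    """Flag a bid if any item price exceeds cutoff x the baseline value
    (e.g. an average of competitors' prices) for that item."""
    return any(p > cutoff * b for p, b in zip(bid, baseline))

costs = [100.0, 100.0, 100.0, 100.0]     # items in order of execution
balanced = [c * 1.10 for c in costs]     # uniform 10% markup
skewed = front_load(costs)

assert abs(sum(skewed) - sum(balanced)) < 1e-9   # total bid unchanged
print(detect(skewed, balanced), detect(balanced, balanced))
```

With a large enough shift the skewed bid trips the cut-off while the balanced bid does not; with a small shift it would go undetected, which is exactly the testing gap the abstract raises.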
Abstract:
This study presents an improved method of dealing with embedded tax liabilities in portfolio choice. We argue that a risk-free discount rate is appropriate for calculating the present value of future tax liabilities. Consistent with recent research, our results show a taxation-induced preference for holding equities over bonds, and a location preference for holding equities in the taxable account and bonds in retirement accounts. These findings contrast with traditional investment advice, which suggests a greater capacity for risk in retirement accounts.
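The core claim, that embedded tax liabilities should be discounted at the risk-free rate, can be illustrated numerically. The amounts and rates below are assumptions for illustration, not figures from the study.

```python
# Present value of a deferred tax liability, discounted at the risk-free
# rate as the abstract argues. Amounts and rates are illustrative.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

liability = 10_000.0   # tax due when the asset is sold in 20 years
risk_free = 0.03       # risk-free discount rate
risky = 0.08           # rate implied by discounting at the asset's return

pv_rf = present_value(liability, risk_free, 20)
pv_risky = present_value(liability, risky, 20)

# Discounting at the risk-free rate yields a larger embedded liability
# (a smaller after-tax position) than discounting at a risky rate.
print(round(pv_rf, 2), round(pv_risky, 2))
```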
Abstract:
This article presents new theoretical and empirical evidence on the forecasting ability of prediction markets. We develop a model predicting that the time until expiration of a prediction market should negatively affect the accuracy of prices as a forecasting tool, in the direction of a 'favourite/longshot bias': high-likelihood events are underpriced, and low-likelihood events are overpriced. We confirm this result using a large data set of prediction market transaction prices. Prediction markets are reasonably well calibrated when time to expiration is relatively short, but prices are significantly biased for events farther in the future. When the time value of money is considered, the miscalibration can be exploited to earn excess returns only when the trader has a relatively low discount rate.
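The role of the discount rate in exploiting the miscalibration can be sketched with a small calculation. The price, true probability, horizon, and the trader's 4% discount rate are illustrative assumptions, not values from the article.

```python
# Sketch: a contract paying 1 if the event occurs, trading below its true
# probability, is only worth buying if the trader's discount rate is
# below the implied annualized return. Numbers are illustrative.

def implied_annual_return(price, true_prob, years):
    """Annualized expected return from buying at `price` a claim whose
    true expected payoff is `true_prob`, settled in `years` years."""
    return (true_prob / price) ** (1 / years) - 1

# A 'favourite' underpriced at 0.80 vs a true probability of 0.90,
# expiring in 2 years:
r = implied_annual_return(0.80, 0.90, 2.0)
worth_trading = r > 0.04     # trader's (assumed) discount rate of 4%
print(round(r, 4), worth_trading)
```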
Abstract:
Suppose two treatments with binary responses are available for patients with some disease, and that each patient will receive one of the two treatments. In this paper we consider the interests of patients both within and outside a trial using a Bayesian bandit approach, and conclude that equal allocation is not appropriate for either group of patients. It is suggested that Gittins indices should be used (via an approach called dynamic discounting, in which the discount rate is chosen based on the number of future patients in the trial) if the disease is rare, and the least-failures rule if the disease is common. Some analytical and simulation results are provided.
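As a rough illustration of why unequal allocation can serve in-trial patients better than equal allocation, the toy simulation below compares a Thompson-sampling rule with strict alternation for two Bernoulli treatments. This is not the Gittins index / dynamic-discounting rule of the paper; the success probabilities, trial size, and priors are assumptions.

```python
import random

# Toy only: an adaptive allocation rule vs. equal allocation for two
# Bernoulli treatments, counting in-trial failures. NOT the paper's
# Gittins index method; parameters are illustrative assumptions.

def run_trial(p=(0.3, 0.7), n=1000, adaptive=True, seed=1):
    rng = random.Random(seed)
    succ, fail = [1, 1], [1, 1]          # Beta(1,1) priors on each arm
    failures = 0
    for t in range(n):
        if adaptive:
            # Thompson sampling: play the arm with the larger posterior draw
            draws = [rng.betavariate(succ[a], fail[a]) for a in (0, 1)]
            arm = 0 if draws[0] > draws[1] else 1
        else:
            arm = t % 2                   # strict equal allocation
        if rng.random() < p[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
            failures += 1
    return failures

# Adaptive allocation concentrates patients on the better arm and
# typically incurs fewer failures than equal allocation.
print(run_trial(adaptive=True), run_trial(adaptive=False))
```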
Abstract:
The purpose of the study was to analyse the factors behind regional differences in land prices. The key issue was to identify the effects of policy on farmland prices. In addition to a comprehensive literature review, a theoretical analysis as well as modern panel and spatial econometric techniques were utilized. The study clearly pointed out the importance of taking possible spatial dependence into account. The data were exceptionally large, comprising more than 6,000 observations, allowing a thorough econometric estimation that accounts for the spatial nature of the data. This study supports the view that many factors besides pure agricultural returns affect farmland prices. It was also found that support payments clearly affect land prices. However, rather than the discount rates for support and market returns being similar, the analysis suggests that the discount rate for support is somewhat lower. If so, this would indicate that farmers rely more on support income than on market returns. The results support the view in the literature that land values are more responsive to government payments when these payments are perceived to be permanent. An important result of this study is that structural differences between regions and structural change in agriculture appear to play a considerable role in land prices. First, the present structure affects competition in the land market: the denser the farms in a region, the more potential buyers there are, and the higher the land price. Second, change in farm structure (especially in animal husbandry), connected to policy changes that increase area-based support, affects land prices. The effect comes from two sources: growing farms need more land for manure spreading, and the proportion of retiring farmers may be lower.
The introduction of a manure density variable proved an efficient way to capture the otherwise difficult-to-model environmental pressure caused by structural change in animal husbandry. Finally, infrastructure also plays a very important role in determining the price level of agricultural land. If other industries are prospering in the surrounding area, agricultural viability also seems to improve, and the non-farm opportunities offered to farm families make continuing and developing farming more tempting.
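The capitalization argument, that support discounted at a lower rate is capitalized more strongly into land prices, can be sketched with a simple perpetuity formula. The per-hectare figures and rates are illustrative assumptions, not estimates from the study.

```python
# Capitalization sketch: land price as the discounted stream of market
# returns and support payments, each capitalized at its own rate.
# Figures are illustrative assumptions.

def land_value(market_return, support, r_market, r_support):
    """Perpetuity value: each income stream capitalized at its own rate."""
    return market_return / r_market + support / r_support

# A lower discount rate on support (perceived as more permanent)
# capitalizes it more strongly into the land price:
same_rate = land_value(200.0, 200.0, 0.05, 0.05)
support_safer = land_value(200.0, 200.0, 0.05, 0.04)
print(same_rate, support_safer)
```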
Abstract:
Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving phosphorus runoff. The privately optimal level of phosphorus use is determined by input and output prices and by the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities, and increasing the probability of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in the soil is treated as a dynamic state variable, with annual phosphorus fertilization as the control variable. For crop response to phosphorus, the state variable is more important than annual fertilization, and its level is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, the discount rate, and soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, economic incentives should be conditioned directly on soil phosphorus values rather than on annual phosphorus applications. The results also emphasize the substantial effect that differences between the discount rates of the farmer and the social planner have on optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels.
It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. Soil phosphorus processes are slow; depleting high-phosphorus soils may therefore take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing a flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may, under very general conditions, have an interesting property: the tail of the payoffs of the privately optimal path, as well as of its steady state, may provide higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the framework of optimal phosphorus use developed here.
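The carryover dynamics described above can be sketched with a minimal state equation. The linear transition, decay rate, and phosphorus levels are illustrative assumptions, not the dissertation's model; the point is only that a slowly decaying stock takes decades to deplete.

```python
# Minimal sketch of soil-phosphorus state dynamics: the plant-available
# stock x decays slowly and is replenished by annual fertilization f.
# Decay rate and levels are illustrative assumptions.

def years_to_deplete(x0, target, decay=0.03, fert=0.0):
    """Years of fertilization `fert` until the stock x falls to `target`."""
    x, years = x0, 0
    while x > target:
        x = (1 - decay) * x + fert   # state transition
        years += 1
    return years

# At a steady state x*, fertilization just offsets decay: f* = decay * x*
steady_state_fert = 0.03 * 15.0
assert abs((1 - 0.03) * 15.0 + steady_state_fert - 15.0) < 1e-9

print(years_to_deplete(40.0, 15.0))  # depleting a high-P soil takes decades
```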
Abstract:
Submergence of land is a major impact of large hydropower projects. Such projects are often also dogged by siltation, delays in construction and heavy debt burdens, factors that are not considered in the project planning exercise. A simple constrained optimization model for the benefit-cost analysis of large hydropower projects that considers these features is proposed. The model is then applied to two sites in India. Using the potential productivity of an energy plantation on the submergible land is suggested as a reasonable approach to estimating the opportunity cost of submergence. Optimum project dimensions are calculated for various scenarios. Results indicate that the inclusion of submergence cost may lead to a substantial reduction in net present value and hence in project viability. Parameters such as project lifespan, construction time, discount rate and external debt burden are also of significance. The designs proposed by the planners are found to be uneconomic, while even the optimal design may not be viable for more typical scenarios. The concept of energy opportunity cost is useful for preliminary screening; some projects may require more detailed calculations. The optimization approach helps identify significant trade-offs between energy generation and land availability.
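The effect of including the opportunity cost of submerged land in the net present value can be sketched as follows. The constant-flow simplification and all figures are assumptions for illustration, not values from the two Indian sites.

```python
# NPV sketch including the opportunity cost of submerged land, valued
# (as the paper suggests) as forgone energy-plantation output.
# All figures and the constant-flow simplification are assumptions.

def npv(annual_benefit, annual_land_cost, capital_cost, rate, life):
    pv = sum((annual_benefit - annual_land_cost) / (1 + rate) ** t
             for t in range(1, life + 1))
    return pv - capital_cost

ignoring_land = npv(120.0, 0.0, 1000.0, 0.08, 30)
with_land_cost = npv(120.0, 30.0, 1000.0, 0.08, 30)

# Including the submergence cost sharply reduces project NPV.
print(round(ignoring_land, 1), round(with_land_cost, 1))
```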
Abstract:
The purpose of this article is to characterize dynamic optimal harvesting trajectories that maximize discounted utility assuming an age-structured population model, in the same line as Tahvonen (2009). The main novelty of our study is that it uses, as the age-structured population model, the standard stochastic cohort framework applied in Virtual Population Analysis for fish stock assessment. This allows us to compare optimal harvesting in a discounted economic context with the standard reference points used by fisheries agencies for long-term management plans (e.g. Fmsy). Our main findings are the following. First, the optimal steady state is characterized, and sufficient conditions that guarantee its existence and uniqueness for the general case of n cohorts are shown. It is also proved that the optimal steady state coincides with the traditional target Fmsy when the utility function to be maximized is the yield and the discount rate is zero. Second, an algorithm to calculate the optimal path that easily drives the resource to the steady state is developed. Third, the algorithm is applied to the Northern Stock of hake. Results show that management plans based exclusively on traditional reference targets such as Fmsy may drive fishery economic results far from the optimum.
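The zero-discount-rate result, that the optimal steady state coincides with maximum sustainable yield, can be illustrated with a toy logistic surplus-production model rather than the paper's age-structured cohort model. The growth parameters are assumptions.

```python
# Toy surplus-production model (logistic growth), NOT the age-structured
# cohort model of the paper: with a zero discount rate, the optimal
# steady state maximizes sustainable yield, mirroring the Fmsy result.

def sustainable_yield(F, r=0.5, K=100.0):
    """Equilibrium yield at fishing mortality F for logistic growth
    dB/dt = r*B*(1 - B/K) - F*B, whose equilibrium is B = K*(1 - F/r)."""
    B = K * (1 - F / r)          # valid (positive) only for F < r
    return F * B if B > 0 else 0.0

Fs = [i / 1000 for i in range(500)]
F_best = max(Fs, key=sustainable_yield)
print(round(F_best, 3))          # maximum sustainable yield at F = r/2
```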
Abstract:
Worldwide, hepatitis caused by viral infections has been a major public-health concern because of its chronic character, asymptomatic course, and capacity to destroy liver function. With the large-scale use of antiretroviral drugs, liver disease related to hepatitis C virus (HCV) infection has contributed to a radical change in the natural history of human immunodeficiency virus (HIV) infection. The burden of HCV/HIV co-infection in Brazil is not known precisely, but evidence indicates that, regardless of geographic region, co-infected individuals have greater difficulty clearing HCV after pharmacological treatment than mono-infected individuals. Within SUS (the Brazilian Unified Health System), the standard antiviral treatment for carriers of HCV genotype 1 and HIV is pegylated interferon combined with ribavirin. The two most recent therapeutic protocols diverge on the duration of treatment and on which individuals should be included: the most recent guideline recommends treating early responders as well as slow virological responders, whereas the immediately preceding guideline excludes, at week 12, individuals who have not responded completely. Based on this divergence, this study aimed to assess the cost-effectiveness of HCV treatment in individuals with genotype 1, co-infected with HIV, antiviral-treatment-naive, non-cirrhotic and immunologically stable, under the antiviral treatment rules established by the two most recent therapeutic guidelines for care within SUS. To this end, a mathematical decision model based on Markov chains was built to simulate the progression of liver disease with and without treatment. A hypothetical cohort of one thousand men over 40 years of age was followed.
The perspective of the Unified Health System was adopted, with a 30-year time horizon and a 5% discount rate for costs and clinical consequences. Extending treatment to slow responders yielded an increment of 0.28 quality-adjusted life years (QALY), a 7% gain in survival, and a 60% increase in the number of individuals who cleared HCV. Beyond the expected efficacy benefits, including slow virological responders proved to be a cost-effective strategy, reaching an incremental cost-effectiveness ratio of R$ 44,171/QALY, below the acceptability threshold proposed by the World Health Organization (WHO) of R$ 63,756/QALY. Sensitivity analysis showed that the uncertainties in the model are unable to alter the final result, demonstrating the robustness of the analysis. From a pharmacoeconomic standpoint, including slow virological responders co-infected with HCV/HIV in the treatment protocol is a strategy with a favourable cost-effectiveness ratio for the Unified Health System. Its adoption is fully compatible with the system's perspective, returning better health outcomes at costs below an acceptable budget ceiling, and with society's, by averting more complications and hospitalizations than non-inclusion.
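The incremental cost-effectiveness decision rule used above can be sketched as follows. Only the 0.28 QALY increment, the R$ 44,171/QALY ratio, and the R$ 63,756/QALY threshold come from the abstract; the cost increment is back-calculated from those figures for illustration.

```python
# ICER sketch. Only the QALY increment, the reported ICER, and the
# WHO-based threshold come from the abstract; the cost increment is
# back-calculated for illustration.

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio."""
    return delta_cost / delta_qaly

delta_qaly = 0.28                  # QALY gain from treating slow responders
delta_cost = 0.28 * 44_171        # cost increment consistent with the
                                  # reported ICER of R$ 44,171/QALY
threshold = 63_756                # willingness-to-pay threshold (R$/QALY)

ratio = icer(delta_cost, delta_qaly)
print(round(ratio), ratio < threshold)  # cost-effective if below threshold
```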
Abstract:
Tandem mass spectrometry (MS/MS) is considered the worldwide gold standard for newborn screening (NBS) of inborn errors of metabolism (IEM). Besides offering better sensitivity and specificity, it makes it possible to screen a wide range of IEM with a single test. The National Newborn Screening Program (PNTN) currently screens for five diseases (phenylketonuria, congenital hypothyroidism, cystic fibrosis, haemoglobinopathies and biotinidase deficiency). One of the PNTN's goals is continual improvement and the incorporation of new diseases and/or technologies. With the recent recommendation by CONITEC (the National Commission for the Incorporation of Technologies) to acquire MS/MS for the diagnosis of rare diseases, this technology is expected to broaden the set of screened diseases and improve the quality of the diagnostic test, helping to improve the quality of life of children affected by IEM. This study aimed to carry out a cost-effectiveness analysis of incorporating tandem MS/MS into newborn screening, from the SUS perspective. To this end, different NBS scenarios were compared: the currently used technology (fluorimetry) for phenylketonuria (PKU) only, and MS/MS for screening both PKU and medium-chain acyl-coenzyme A dehydrogenase deficiency (MCAD). A mathematical decision model based on Markov chains was built to simulate NBS for PKU and MCAD, as well as the natural history of MCAD. A hypothetical cohort of one hundred thousand newborns was followed. The time horizon adopted was the life expectancy of the Brazilian population, 78 years according to IBGE. A 5% discount rate was applied to costs and clinical consequences in both proposed scenarios.
When MS/MS was incorporated for PKU screening alone, the health gains remained the same, since MS/MS and fluorimetry performed almost identically (effectiveness), but the incremental cost was four times higher for the same effectiveness, making MS/MS for PKU alone not cost-effective (dominated). However, in the scenario using MS/MS to screen for both PKU and MCAD, the incremental cost of MS/MS within the PNTN was lower because of the savings achieved, since both tests can be performed on the same heel-prick sample currently collected.
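The dominance reasoning above, that a strategy costing more without adding effectiveness is dominated, can be sketched as a simple check. The cost and effectiveness values are illustrative; only the qualitative same-effectiveness, four-times-cost pattern follows the abstract.

```python
# Simple dominance check as used in cost-effectiveness comparisons:
# a strategy is dominated when it costs more without being more
# effective. Values below are illustrative.

def dominated(cost_new, eff_new, cost_old, eff_old):
    """New strategy is (strongly) dominated: costlier, no more effective."""
    return cost_new > cost_old and eff_new <= eff_old

# MS/MS for PKU alone: same effectiveness, roughly four times the cost
print(dominated(cost_new=4.0, eff_new=1.0, cost_old=1.0, eff_old=1.0))
```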