172 results for Optimal Taxation
Abstract:
Distributed generation (DG) resources are commonly used in electric power systems, and minimum line losses in radial distribution systems are among the benefits of DG. Studies have shown the importance of appropriate selection of the location and size of DGs. This paper proposes an analytical method for solving the optimal distributed generation placement (ODGP) problem to minimize line losses in radial distribution systems, using a loss sensitivity factor (LSF) based on the bus-injection to branch-current (BIBC) matrix. The proposed method is formulated and tested on 12- and 34-bus radial distribution systems. The classical grid search algorithm based on successive load flows is employed to validate the results. The main advantages of the proposed method compared with conventional methods are its robustness and the fact that there is no need to calculate and invert large admittance or Jacobian matrices. Therefore, the simulation time and the amount of computer memory required for processing data, especially for large systems, decrease.
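As a rough illustration of the BIBC-based loss calculation and loss sensitivity factors described above, the sketch below evaluates I²R losses and LSFs on a made-up 4-bus radial feeder; the feeder topology, resistances and load currents are invented, and the formulation is a simplification rather than the paper's exact method.

```python
# Hypothetical 4-bus radial feeder (bus 0 = substation, buses 1..3 load buses).
BIBC = [[1, 1, 1],   # BIBC[b][k] = 1 if branch b carries the current
        [0, 1, 1],   # injected at bus k+1
        [0, 0, 1]]
R = [0.10, 0.15, 0.20]   # branch resistances (p.u., illustrative)
I = [0.8, 0.5, 0.6]      # bus load currents (p.u., illustrative)

def branch_currents(load_currents):
    return [sum(row[k] * load_currents[k] for k in range(len(load_currents)))
            for row in BIBC]

def line_losses(load_currents):
    """Total I^2 R loss implied by the bus injections."""
    return sum(r * ib ** 2 for r, ib in zip(R, branch_currents(load_currents)))

# LSF[k] = d(loss)/d(load current at bus k+1); a DG unit offsets load
# current, so the bus with the largest LSF is the best placement candidate.
B = branch_currents(I)
LSF = [2.0 * sum(R[b] * B[b] * BIBC[b][k] for b in range(len(B)))
       for k in range(len(I))]
best_bus = LSF.index(max(LSF)) + 1

print("base loss (p.u.):", round(line_losses(I), 4))
print("LSF:", [round(v, 3) for v in LSF])
print("candidate DG bus:", best_bus)
```

As expected on a uniform feeder, the sensitivity is largest at the end of the line, so the end bus is flagged as the best DG location.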
Abstract:
In this paper, the load profile and operational goals are used to find the optimal sizing of a combined PV-energy storage system for a future grid-connected residential building. As part of this approach, five operational goals are introduced and the annual cost for each operational goal is assessed. Finally, the optimal sizing for the combined PV-energy storage system is determined using a direct search method. In addition, the sensitivity of the annual cost to different parameters is analyzed.
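The direct search mentioned above can be illustrated with a toy exhaustive search over a discretised (PV size, battery size) grid; the cost model and every coefficient below are invented stand-ins, not the paper's operational goals or data.

```python
import itertools

# Toy annual-cost model (all coefficients assumed, for illustration only):
# annualised capex plus an energy bill that shrinks as PV and storage grow,
# until the load is covered, after which extra capacity only adds capex.
ANNUAL_LOAD_KWH = 6000.0
PV_YIELD = 1400.0                  # kWh per kW of PV per year (assumed)
PV_COST, BATT_COST = 120.0, 60.0   # annualised $ per kW / per kWh (assumed)
GRID_PRICE = 0.30                  # $ per kWh imported (assumed)

def annual_cost(pv_kw, batt_kwh):
    # Storage lets a larger share of PV output displace imports (crude proxy).
    self_use = min(1.0, 0.4 + 0.1 * batt_kwh)
    offset = min(ANNUAL_LOAD_KWH, pv_kw * PV_YIELD * self_use)
    imports = ANNUAL_LOAD_KWH - offset
    return PV_COST * pv_kw + BATT_COST * batt_kwh + GRID_PRICE * imports

# Direct (exhaustive) search over the discretised design grid.
grid = itertools.product([i * 0.5 for i in range(0, 17)],   # 0..8 kW PV
                         [j * 1.0 for j in range(0, 11)])   # 0..10 kWh battery
best = min(grid, key=lambda s: annual_cost(*s))
print("optimal sizing (pv_kw, batt_kwh):", best,
      "annual cost:", round(annual_cost(*best), 2))
```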
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion of observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which assigns probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to modelling uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, indeed, not optimal in general.
We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty arising from output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
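The certainty equivalence idea discussed above can be sketched in a few lines: an observer produces a state estimate, and the deterministic (unconstrained LQ-style) law is applied to that estimate and then saturated to respect the input constraint. All gains, noise levels and the scalar plant below are illustrative assumptions, not values from the book.

```python
import random

# CE control sketch for a scalar constrained system
# x+ = a*x + b*u + w,  y = x + v   (all parameters assumed).
a, b = 1.2, 1.0
K = 0.9        # assumed unconstrained LQ feedback gain
u_max = 1.0    # input constraint |u| <= u_max
L = 0.6        # assumed steady-state observer (Kalman-style) gain

def ce_control(x_hat):
    """Deterministic law applied to the state estimate, then saturated."""
    u = -K * x_hat
    return max(-u_max, min(u_max, u))

random.seed(0)
x, x_hat = 3.0, 0.0
for _ in range(30):
    u = ce_control(x_hat)
    y = x + random.gauss(0, 0.05)                 # noisy measurement
    x_hat = a * x_hat + b * u + L * (y - x_hat)   # predictor-corrector estimate
    x = a * x + b * u + random.gauss(0, 0.05)     # true plant with disturbance
print("final |x|:", round(abs(x), 3))
```

Despite the open-loop instability (a > 1) and the saturation, the CE loop drives the state into a small noise-driven neighbourhood of the origin, which is the near-optimal behaviour the chapter's examples point to.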
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
Abstract:
The Australian Taxation Office (ATO) attempted to challenge both the private equity fund reliance on double tax agreements and the assertion that profits were capital in nature in its dispute with private equity group TPG. Failure to resolve the dispute resulted in the ATO issuing two taxation determinations: TD 2010/20, which states that the general anti-avoidance provisions can apply to arrangements designed to alter the intended effect of Australia's international tax agreements network; and TD 2010/21, which states that the profits on the sale of shares in a company group acquired in a leveraged buyout are assessable income. The purpose of this article is to determine the effectiveness of the administrative rulings regime as a regulatory strategy. Using the TPG-Myer scenario and subsequent tax determinations as a case study, this article collects qualitative data which is then analysed (and triangulated) using tonal and thematic analysis. Contemporaneous commentary of private equity stakeholders, tax professionals, and media observations are analysed and evaluated within a framework of responsive regulation, utilising the current ATO compliance model. Contrary to the stated purpose of the ATO rulings regime, which is to alleviate complexities in Australian taxation law and provide certainty to taxpayers, and despite the de facto law status afforded these rulings, this study found that the majority of private equity stakeholders and their advisors perceived that greater uncertainty was created by the two determinations. Thus, this study found that, in the context of private equity fund investors, a responsive regulation measure in the form of taxation determinations was not effective.
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the mean of a utility, and the utility is a function of the posterior distribution. Therefore, its estimation requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither analytic nor can be computed in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well established method in the literature, and the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface, with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models; a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
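A minimal sketch of the II machinery described above, under assumed toy models: simulations from a simple "death process" (whose likelihood is actually tractable, which is why it is useful for validation) are used to estimate the map from the generative parameter to the parameters of a Gaussian auxiliary model, and the mapped auxiliary likelihood then scores the observed data. A flat prior and a grid point estimate stand in for the full MCMC scheme of Müller et al.

```python
import math, random

random.seed(1)

# Toy death process: N0 individuals each die independently by time T with
# probability 1 - exp(-theta*T). Model, prior, grid and observed value are
# all assumed for illustration; this is not the paper's macroparasite model.
N0, T = 50, 1.0

def simulate(theta, n_rep=200):
    p = 1.0 - math.exp(-theta * T)
    return [sum(random.random() < p for _ in range(N0)) for _ in range(n_rep)]

def fit_auxiliary(samples):
    """Auxiliary model: Gaussian fitted by moments."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, max(v, 1e-6)

# Estimate the parameter map theta -> auxiliary parameters by simulation.
grid = [0.2 + 0.1 * i for i in range(17)]          # theta in [0.2, 1.8]
mapping = {th: fit_auxiliary(simulate(th)) for th in grid}

# II "posterior" under a flat prior: score the observed data with the mapped
# auxiliary likelihood and take the best grid point.
y_obs = 30  # hypothetical observed death count out of N0

def aux_loglik(y, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (y - mu) ** 2 / (2 * var)

theta_hat = max(grid, key=lambda th: aux_loglik(y_obs, *mapping[th]))
print("II point estimate of theta:", round(theta_hat, 2))
```

For 30 deaths out of 50 the exact maximum-likelihood value is -ln(0.4) ≈ 0.92, so the II estimate should land on a nearby grid point.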
Abstract:
Bayesian experimental design is a fast growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, facilitating more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview on the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
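One commonly used utility mentioned above, the expected Shannon information gain (the mutual information between parameter and data), can be estimated by nested Monte Carlo and then maximised over candidate designs. The sketch below does this for an assumed toy Bernoulli model whose design sensitivity peaks at d = 1; the model and all constants are illustrative, not from the paper.

```python
import math, random

random.seed(2)

# Toy design problem (assumed): y ~ Bernoulli(sigmoid(3*theta*d*exp(-d))),
# theta ~ N(0, 1), so informativeness peaks near d = 1.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lik(y, theta, d):
    p = sigmoid(3.0 * theta * d * math.exp(-d))
    return p if y == 1 else 1.0 - p

def expected_info_gain(d, m=1000):
    """Nested Monte Carlo estimate of I(theta; y) at design d."""
    thetas = [random.gauss(0, 1) for _ in range(m)]
    total = 0.0
    for th in thetas:
        y = 1 if random.random() < sigmoid(3.0 * th * d * math.exp(-d)) else 0
        evidence = sum(lik(y, t, d) for t in thetas) / m  # inner MC marginal
        total += math.log(lik(y, th, d)) - math.log(evidence)
    return total / m

candidates = [0.25, 0.5, 1.0, 2.0, 4.0]
utilities = {d: expected_info_gain(d) for d in candidates}
best_d = max(utilities, key=utilities.get)
print({d: round(u, 4) for d, u in utilities.items()})
print("estimated optimal design:", best_d)
```

This grid maximisation is the simplest possible "search over the design space"; the algorithms surveyed in the paper replace it with MCMC, sequential Monte Carlo or approximate methods when the space is continuous or high-dimensional.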
Abstract:
The efficiency of the nitrogen (N) application rates 0, 120, 180 and 240 kg N ha⁻¹ in combination with low or medium water levels in the cultivation of winter wheat (Triticum aestivum L.) cv. Kupava was studied for the 2005–2006 and 2006–2007 growing seasons in the Khorezm region of Uzbekistan. The results show an impact of the initial soil Nmin (NO₃-N + NH₄-N) levels, measured at wheat seeding, on the effectiveness of the N fertilizer rates applied. When the Nmin content in the 0–50 cm soil layer was lower than 10 mg kg⁻¹ during wheat seeding in 2005, the N rate of 180 kg ha⁻¹ was found to be the most effective for achieving high grain yields of high quality. With a higher Nmin content of about 30 mg kg⁻¹, as was the case in the 2006 season, 120 kg N ha⁻¹ was determined to be the technical and economical optimum. The temporal course of N₂O emissions of winter wheat cultivation for the two water levels studied shows that emissions were strongly influenced by irrigation and N fertilization. Extremely high emissions were measured immediately after fertilizer application events that were combined with irrigation events. Given the high impact of N-fertilizer and irrigation-water management on N₂O emissions, it can be concluded that present N-management practices should be modified to mitigate N₂O emissions and to achieve higher fertilizer use efficiency.
Abstract:
Rapid diagnostic tests (RDTs) represent important tools to diagnose malaria infection. To improve understanding of the variable performance of RDTs that detect the major target in Plasmodium falciparum, namely, histidine-rich protein 2 (HRP2), and to inform the design of better tests, we undertook detailed mapping of the epitopes recognized by eight HRP-specific monoclonal antibodies (MAbs). To investigate the geographic skewing of this polymorphic protein, we analyzed the distribution of these epitopes in parasites from geographically diverse areas. To identify an ideal amino acid motif for a MAb to target in HRP2 and in the related protein HRP3, we used a purpose-designed script to perform bioinformatic analysis of 448 distinct gene sequences from pfhrp2 and from 99 sequences from the closely related gene pfhrp3. The frequency and distribution of these motifs were also compared to the MAb epitopes. Heat stability testing of MAbs immobilized on nitrocellulose membranes was also performed. Results of these experiments enabled the identification of MAbs with the most desirable characteristics for inclusion in RDTs, including copy number and coverage of target epitopes, geographic skewing, heat stability, and match with the most abundant amino acid motifs identified. This study therefore informs the selection of MAbs to include in malaria RDTs as well as in the generation of improved MAbs that should improve the performance of HRP-detecting malaria RDTs.
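The kind of motif frequency and coverage analysis described above can be sketched as follows; the sequences and motifs below are invented placeholders (real PfHRP2 repeats are variations on histidine-rich, AHH-containing motifs), and the script simply counts overlapping motif occurrences and per-sequence coverage, the two quantities the abstract compares against MAb epitopes.

```python
from collections import Counter

def count_motif(seq, motif):
    """Overlapping occurrences of motif in seq."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

# Made-up stand-ins for pfhrp2-derived amino acid sequences and candidate
# target motifs -- not real epitope data.
sequences = [
    "AHHAHHAADAHHAHHAAD",
    "AHHAADAHHAHHAADAHH",
    "AHHAHHAHHAADAHHAAD",
]
motifs = ["AHHAAD", "AHHAHH", "AHH"]

totals = Counter()     # total (copy-number) count per motif
coverage = Counter()   # number of sequences containing the motif at least once
for motif in motifs:
    for seq in sequences:
        n = count_motif(seq, motif)
        totals[motif] += n
        coverage[motif] += n > 0

for motif in motifs:
    print(motif, "total:", totals[motif], "in", coverage[motif], "of",
          len(sequences), "sequences")
```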
Abstract:
A system requiring a waste management license from an enforcement agency has been introduced in many countries. A license system is usually coupled with fines, a manifest, and a disposal tax. However, these policy devices have not been integrated into an optimal policy. In this paper we derive an optimal waste management policy by using those policy devices. Waste management policies are met with three difficult problems: asymmetric information, the heterogeneity of waste management firms, and non-compliance by waste management firms and waste disposers. The optimal policy in this paper overcomes all three problems.
Abstract:
Maintenance decisions for large-scale asset systems are often beyond an asset manager's capacity to handle. The presence of a number of possibly conflicting decision criteria, the large number of possible maintenance policies, and the reality of budget constraints often produce complex problems, where the underlying trade-offs are not apparent to the asset manager. This paper presents the decision support tool "JOB" (Justification and Optimisation of Budgets), which has been designed to help asset managers of large systems assess, select, interpret and optimise the effects of their maintenance policies in the presence of limited budgets. This decision support capability is realized through an efficient, scalable backtracking-based algorithm for the optimisation of maintenance policies, while enabling the user to view a number of solutions near this optimum and explore trade-offs with other decision criteria. To assist the asset manager in selecting between various policies, JOB also provides the capability of Multiple Criteria Decision Making. In this paper, the JOB tool is presented and its applicability demonstrated for the maintenance of a complex power plant system.
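A minimal sketch of a backtracking search of the kind described above, under assumed data: each asset has a few candidate policies with a cost and a benefit, and the search picks one policy per asset to maximise total benefit within the budget, pruning any branch whose optimistic bound cannot beat the incumbent. The asset data, policy names and numbers are invented for illustration and have nothing to do with JOB's real inputs.

```python
# (policy name, annual cost, benefit) per asset -- illustrative values only.
assets = [
    [("run-to-failure", 0, 1), ("inspect", 3, 4), ("overhaul", 8, 9)],
    [("run-to-failure", 0, 2), ("inspect", 2, 5), ("overhaul", 7, 8)],
    [("run-to-failure", 0, 1), ("inspect", 4, 6), ("overhaul", 9, 10)],
]
BUDGET = 10

best = {"benefit": -1, "plan": None}

def backtrack(i, cost, benefit, plan):
    # Prune: even taking the most beneficial remaining policies cannot win.
    upper = benefit + sum(max(b for _, _, b in a) for a in assets[i:])
    if upper <= best["benefit"]:
        return
    if i == len(assets):
        best["benefit"], best["plan"] = benefit, list(plan)
        return
    for name, c, b in assets[i]:
        if cost + c <= BUDGET:          # respect the budget constraint
            plan.append(name)
            backtrack(i + 1, cost + c, benefit + b, plan)
            plan.pop()

backtrack(0, 0, 0, [])
print("best benefit:", best["benefit"], "plan:", best["plan"])
```

Keeping the runner-up plans encountered during the search (instead of only the incumbent) is what enables the "solutions near this optimum" exploration the paper describes.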
Abstract:
This paper translates the concepts of sustainable production into three dimensions of economic, environmental and ecological sustainability to analyze optimal production scales by solving optimization problems. Economic optimization seeks input-output combinations that maximize profits. Environmental optimization searches for input-output combinations that minimize the polluting effects of the materials balance on the surrounding environment. Ecological optimization looks for input-output combinations that minimize the cumulative destruction of the entire ecosystem. Using an aggregate space, the framework illustrates that these optimal scales are often not identical because markets fail to account for all negative externalities. Profit-maximizing firms normally operate at scales larger than those that are optimal from the viewpoints of environmental and ecological sustainability; hence policy interventions are favoured. The framework offers a useful tool for efficiency studies and policy implication analysis. The paper provides an empirical investigation using a data set of rice farms in South Korea.
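The divergence between scales noted above can be made concrete with an assumed toy model: with output f(x) = √x, a profit function and a materials-balance residual (input not embodied in output) give different optimal input scales. All functional forms and prices below are invented for illustration, not the paper's specification.

```python
import math

p, w = 10.0, 1.0   # output price and input cost (assumed)

def profit(x):
    """Economic objective: revenue minus input cost."""
    return p * math.sqrt(x) - w * x

def residual(x):
    """Materials-balance proxy: input not converted into output."""
    return x - math.sqrt(x)

xs = [i / 100 for i in range(1, 5001)]        # input scales 0.01 .. 50.00
x_econ = max(xs, key=profit)                  # profit-maximising scale
x_env = min(xs, key=residual)                 # environmentally optimal scale
print("profit-maximising scale:", x_econ)     # analytic optimum (p/2w)^2 = 25
print("environmentally optimal scale:", x_env)  # analytic optimum 1/4
```

The profit-maximising scale (25) vastly exceeds the environmentally optimal one (0.25), which is the wedge that motivates the policy interventions discussed in the paper.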
Abstract:
This paper investigates demodulation of differentially phase modulated signals (DPMS) using optimal HMM filters. The optimal HMM filter presented in the paper is computationally of order N³ per time instant, where N is the number of message symbols. Previously, optimal HMM filters have been of computational order N⁴ per time instant. Also, suboptimal HMM filters have been proposed of computational order N² per time instant. The approach presented in this paper uses two coupled HMM filters and exploits knowledge of ...
Abstract:
In this paper conditional hidden Markov model (HMM) filters and conditional Kalman filters (KF) are coupled together to improve demodulation of differentially encoded signals in noisy fading channels. We present an indicator matrix representation for differentially encoded signals and the optimal HMM filter for demodulation. The filter requires O(N³) calculations per time iteration, where N is the number of message symbols. Decision feedback equalisation is investigated via coupling the optimal HMM filter for estimating the message, conditioned on estimates of the channel parameters, and a KF for estimating the channel states, conditioned on soft information message estimates. The particular differential encoding scheme examined in this paper is differential phase shift keying. However, the techniques developed can be extended to other forms of differential modulation. The channel model we use allows for multiplicative channel distortions and additive white Gaussian noise. Simulation studies are also presented.
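As a rough sketch of HMM filtering for differentially encoded signals (simplified well beyond the paper's coupled HMM/KF scheme: binary phase, known unit channel gain, AWGN only, all constants assumed), the standard forward filtering recursion can be written as:

```python
import math, random

random.seed(3)

# Toy differentially encoded BPSK: the hidden state is the current phase
# (index 0 -> +1, index 1 -> -1) and each message bit flips it with
# probability 0.5. The filter is the O(N^2)-per-step forward recursion.
N = 2
phases = [1.0, -1.0]           # constellation points
P = [[0.5, 0.5], [0.5, 0.5]]   # phase transition probabilities
sigma = 0.4                    # AWGN standard deviation (assumed)

# Simulate a differentially encoded transmission.
true_states, obs = [0], []
for _ in range(200):
    s = true_states[-1] ^ (random.random() < 0.5)   # bit flips the phase
    true_states.append(s)
    obs.append(phases[s] + random.gauss(0, sigma))
true_states = true_states[1:]

# Forward (filtering) recursion: alpha[j] ∝ p(state_t = j | y_1..y_t).
alpha = [0.5, 0.5]
decoded = []
for y in obs:
    new = []
    for j in range(N):
        like = math.exp(-(y - phases[j]) ** 2 / (2 * sigma ** 2))
        new.append(like * sum(alpha[i] * P[i][j] for i in range(N)))
    z = sum(new)
    alpha = [a_j / z for a_j in new]
    decoded.append(max(range(N), key=lambda j: alpha[j]))

errors = sum(d != t for d, t in zip(decoded, true_states))
print("state error rate:", errors / len(obs))
```

With a fixed channel gain the recursion above is cheap; the paper's contribution is running this kind of filter conditioned on KF channel estimates (and vice versa) so that fading channels can be tracked jointly with the message.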