974 results for Linear decision rules
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The treatment of a tumor with ionizing radiation is an ongoing process with well-differentiated stages. These stages include tumor diagnosis and location, the decision on the treatment strategy, absorbed dose planning and calculation, treatment administration, absorbed dose verification, and the evaluation of short- and long-term results. The quality of a radiotherapy procedure is closely linked to factors that may be classified as clinical, such as the diagnosis, the tumor location, the chosen treatment strategy and the continuous treatment reassessment; dosimetric or physical, such as the uncertainty in the dose calculation, its optimization and verification, and the suitability of the equipment to provide a radiation beam consistent with the treatment planning; and, finally, others related to the practical application of the radiotherapy treatment and the handling of the patient. In analyzing radiotherapy quality, the three aspects (medical, physical or dosimetric, and practical application) must be considered together. This means that numerous actions of the radiotherapists, medical physicists and radiotherapy technicians must be carried out jointly, and their level of knowledge significantly affects the treatment quality. In this study, the main physical parameters used in dosimetry are defined and determined experimentally for a Mevatron MXT linear accelerator. The aim is to provide recommendations for the physical aspects of Quality Assurance (QA) in radiotherapy treatments, which will usually be applied by professionals in Medical Physics. In addition to these instructions, it is recommended that additional texts be prepared to address in detail the clinical aspects of treatment QA.
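As a minimal illustration of one standard physical dosimetry parameter, the sketch below computes percentage depth dose (PDD) from a depth-dose scan; the depths and readings are invented placeholders, not measurements from the Mevatron MXT.

```python
# Illustrative PDD computation: PDD(d) = 100 * D(d) / D(d_max).
# Depth-dose readings below are invented, not Mevatron MXT data.
import numpy as np

depth_cm = np.array([0.5, 1.5, 2.5, 5.0, 10.0, 20.0])       # measurement depths
reading  = np.array([55.0, 98.0, 100.0, 91.0, 67.0, 38.0])  # ionization readings

pdd = 100.0 * reading / reading.max()   # normalize to the dose maximum
for d, p in zip(depth_cm, pdd):
    print(f"depth {d:5.1f} cm : PDD = {p:5.1f} %")
```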
Abstract:
We consider some of the relations that exist between real Szegö polynomials and certain para-orthogonal polynomials defined on the unit circle, which are again related to certain orthogonal polynomials on [-1, 1] through the transformation x = (z^{1/2} + z^{-1/2})/2. Using these relations we study the interpolatory quadrature rule based on the zeros of polynomials which are linear combinations of the orthogonal polynomials on [-1, 1]. In the case of any symmetric quadrature rule on [-1, 1], its associated quadrature rule on the unit circle is also given.
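As a generic illustration of an interpolatory quadrature rule on [-1, 1] built from the zeros of orthogonal polynomials, the sketch below uses Gauss-Legendre nodes via numpy; the paper's specific para-orthogonal/Szegö construction and the associated rule on the unit circle are not reproduced here.

```python
# Interpolatory quadrature on [-1, 1] from orthogonal-polynomial zeros
# (Gauss-Legendre for concreteness), checked against a closed-form integral.
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(8)  # zeros of P_8 and weights

f = lambda x: np.exp(x) * np.cos(x)
approx = np.dot(weights, f(nodes))
# Antiderivative of e^x cos(x) is e^x (sin(x) + cos(x)) / 2.
exact = (np.exp(1) * (np.sin(1) + np.cos(1))
         - np.exp(-1) * (np.sin(-1) + np.cos(-1))) / 2
print(f"quadrature: {approx:.12f}   exact: {exact:.12f}")
```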
Abstract:
Graduate Program in Agronomy (Irrigation and Drainage) - FCA
Abstract:
Based on the characterization of the biophysical attributes of the watershed (slope, soil types, land-use capability, and land cover), this article used the multi-criteria analysis method Weighted Linear Combination to define priority areas for adapting land use to its capability. With this methodological approach, four classes were created for the watershed under study, formed by different combinations of biophysical attributes (discrete data) and representing priority levels for agricultural land use. Multicriteria evaluation in a GIS is suitable for mapping priority areas for land-use suitability in watersheds. The geospatial information on the biophysical environment, generated from the methodological procedures described in this article, has high potential to guide the rational planning of natural resource use and territorial occupation, besides serving as a powerful instrument to guide policies and collective decision processes on land use and land cover.
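A minimal sketch of the Weighted Linear Combination step, assuming criterion layers already normalized to [0, 1]; the rasters, weights, and class breaks are illustrative stand-ins for the watershed data.

```python
# Weighted Linear Combination over normalized criterion rasters; values,
# weights, and class breaks are placeholders for the watershed layers.
import numpy as np

slope      = np.array([[0.9, 0.6], [0.3, 0.1]])  # flatter terrain scores higher
soil       = np.array([[0.8, 0.7], [0.5, 0.4]])
land_cover = np.array([[0.6, 0.9], [0.2, 0.8]])

weights = {"slope": 0.5, "soil": 0.3, "land_cover": 0.2}  # must sum to 1

suitability = (weights["slope"] * slope
               + weights["soil"] * soil
               + weights["land_cover"] * land_cover)

# Slice the continuous score into four discrete priority classes (arbitrary breaks).
priority_class = np.digitize(suitability, bins=[0.25, 0.5, 0.75]) + 1
print(suitability)
print(priority_class)
```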
Abstract:
Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
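For concreteness, here is a hedged sketch of Feldman-style LP decoding on the binary symmetric channel for a toy parity-check code, assuming scipy is available; the parity-check matrix and crossover probability are illustrative. Fractional optima of this LP are exactly the pseudocodewords discussed above.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 0, 1, 0],    # toy parity-check matrix (assumption)
              [0, 1, 1, 0, 1]])

def lp_decode(y, p=0.1):
    """Feldman-style LP relaxation of ML decoding on a BSC with crossover p."""
    n = H.shape[1]
    llr = np.log((1 - p) / p)
    gamma = np.where(y == 0, llr, -llr)                # LLR cost vector
    A_ub, b_ub = [], []
    for row in H:
        Nj = list(np.flatnonzero(row))
        for k in range(1, len(Nj) + 1, 2):             # odd-sized subsets S of N(j)
            for S in itertools.combinations(Nj, k):
                a = np.zeros(n)
                a[list(S)] = 1.0
                a[[i for i in Nj if i not in S]] = -1.0
                A_ub.append(a)
                b_ub.append(len(S) - 1.0)
    res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x  # integral optimum -> ML codeword; fractional -> pseudocodeword

print(lp_decode(np.array([1, 0, 0, 1, 0])))  # already a codeword here
```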
Abstract:
Although praised for their rationality, humans often make poor decisions, even in simple situations. In the repeated binary choice experiment, an individual has to choose repeatedly between the same two alternatives, where a reward is assigned to one of them with fixed probability. The optimal strategy is to perseverate with choosing the alternative with the best expected return. Whereas many species perseverate, humans tend to match the frequencies of their choices to the frequencies of the alternatives, a sub-optimal strategy known as probability matching. Our goal was to find the primary cognitive constraints under which a set of simple evolutionary rules can lead to such contrasting behaviors. We simulated the evolution of artificial populations, wherein the fitness of each animat (artificial animal) depended on its ability to predict the next element of a sequence made up of a repeating binary string of varying size. When the string was short relative to the animats' neural capacity, they could learn it and correctly predict the next element of the sequence. When it was long, they could not learn it, turning to the next best option: to perseverate. Animats from the last generation then performed the task of predicting the next element of a non-periodic binary sequence. We found that, whereas animats with smaller neural capacity kept perseverating with the best alternative as before, animats with larger neural capacity, which had previously been able to learn the pattern of repeating strings, adopted probability matching and were outperformed by the perseverating animats. Our results demonstrate how the ability to make predictions in an environment endowed with regular patterns may lead to probability matching under less structured conditions. They point to probability matching as a likely by-product of adaptive cognitive strategies that were crucial in human evolution, but may lead to sub-optimal performance in other environments.
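The cost of probability matching is easy to quantify: if the better alternative is rewarded with probability p, perseveration earns p per trial on average, while matching the choice frequencies earns p^2 + (1-p)^2. A two-line check:

```python
# Expected per-trial accuracy: maximizing vs probability matching, p = 0.7.
p = 0.7
print(f"maximizing: {p:.2f}, matching: {p*p + (1-p)*(1-p):.2f}")  # 0.70 vs 0.58
```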
Abstract:
Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree induction algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.
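As a generic illustration of why decision trees suit this setting, the sketch below fits a shallow regression tree to synthetic docking-style features and prints its human-readable rules; sklearn's standard CART-style tree stands in for the automatically designed algorithms of the paper, and the data and feature names are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))   # placeholder docking features (e.g. distances)
y = -8.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 0.3, 200)  # FEB, kcal/mol

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
# A shallow tree stays comprehensible: its rules can be read off directly.
print(export_text(tree, feature_names=["dist_res1", "dist_res2", "dist_res3"]))
```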
Abstract:
Many engineering sectors are challenged by multi-objective optimization problems. Although the idea behind these problems is simple and well established, implementing any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread. Usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select one solution to be deployed. To better support the design process, this paper presents a new method of solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which does not need to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar, and is based on a single run of non-linear single-objective optimizers.
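For reference, a small sketch of what a discrete Pareto set is: filtering the non-dominated candidates from a sample of two-objective costs. This is the set the proposed control-function method avoids computing explicitly; the sample data are random placeholders.

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated rows, all objectives to be minimized."""
    mask = np.ones(len(costs), dtype=bool)
    for i in range(len(costs)):
        if mask[i]:
            dominated = (np.all(costs >= costs[i], axis=1)
                         & np.any(costs > costs[i], axis=1))
            mask &= ~dominated
    return mask

costs = np.random.default_rng(1).random((50, 2))  # random two-objective sample
print(costs[pareto_front(costs)])
```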
Abstract:
Multi-input multi-output (MIMO) technology is an emerging solution for high data rate wireless communications. We develop soft-decision based equalization techniques for frequency selective MIMO channels in the quest for low-complexity equalizers with BER performance competitive to that of ML sequence detection. We first propose soft decision equalization (SDE), and demonstrate that decision feedback equalization (DFE) based on soft decisions, expressed via the posterior probabilities associated with feedback symbols, is able to outperform hard-decision DFE, with a low computational cost that is polynomial in the number of symbols to be recovered, and linear in the signal constellation size. Building upon the probabilistic data association (PDA) multiuser detector, we present two new MIMO equalization solutions to handle the distinctive channel memory. With their low complexity, simple implementations, and impressive near-optimum performance offered by iterative soft-decision processing, the proposed SDE methods are attractive candidates to deliver efficient reception solutions to practical high-capacity MIMO systems. Motivated by the need for low-complexity receiver processing, we further present an alternative low-complexity soft-decision equalization approach for frequency selective MIMO communication systems. With the help of iterative processing, two detection and estimation schemes based on second-order statistics are combined to yield a two-part receiver structure: local multiuser detection (MUD) using soft-decision Probabilistic Data Association (PDA) detection, and dynamic noise-interference tracking using Kalman filtering. The proposed Kalman-PDA detector performs local MUD within a sub-block of the received data instead of over the entire data set, to reduce the computational load. At the same time, all the interference affecting the local sub-block, including both multiple access and inter-symbol interference, is properly modeled as the state vector of a linear system and dynamically tracked by Kalman filtering. Two types of Kalman filters are designed, both of which are able to track a finite impulse response (FIR) MIMO channel of any memory length. The overall algorithms enjoy low complexity that is only polynomial in the number of information-bearing bits to be detected, regardless of the data block size. Furthermore, we introduce two optional performance-enhancing techniques: cross-layer automatic repeat request (ARQ) for uncoded systems and a code-aided method for coded systems. We take Kalman-PDA as an example, and show via simulations that both techniques can render error performance that is better than Kalman-PDA alone and competitive with sphere decoding. Finally, we consider the case where channel state information (CSI) is not perfectly known to the receiver, and present an iterative channel estimation algorithm. Simulations show that the performance of SDE with channel estimation approaches that of SDE with perfect CSI.
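A minimal predict/update sketch of the Kalman recursion used for such dynamic tracking, with generic placeholder matrices rather than the dissertation's specific MIMO interference model:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for the model x' = Fx + w, z = Hx + v."""
    x_pred = F @ x                       # predict state one symbol ahead
    P_pred = F @ P @ F.T + Q             # predict covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage with placeholder matrices (not the dissertation's model).
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.9]), F, H, Q, R)
print(x)
```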
Abstract:
Tax planners often choose debt over equity financing. As this has led to increased corporate debt financing, many countries have introduced thin capitalization rules to secure their tax revenues. In a general capital structure model we analyze whether thin capitalization rules affect dividend and financing decisions, and whether they can partially explain why corporations receive both debt and equity capital. We model the Belgian, German and Italian rules as examples. We find that the so-called Miller equilibrium and definite financing effects depend significantly on the underlying tax system. Further, our results are useful for the treasury in deciding which type of thin capitalization rule to implement.
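A stylized sketch of how a safe-haven thin capitalization rule caps interest deductibility; the 2:1 ratio and figures are illustrative assumptions, not the actual Belgian, German, or Italian parameters.

```python
# Stylized safe-haven thin capitalization rule: interest is deductible only on
# debt up to r times equity. Ratio and figures are illustrative assumptions.
def deductible_interest(debt, equity, interest_rate, safe_haven_ratio):
    allowed_debt = min(debt, safe_haven_ratio * equity)
    return allowed_debt * interest_rate

debt, equity = 900.0, 300.0
print(deductible_interest(debt, equity, interest_rate=0.05, safe_haven_ratio=2.0))
# interest on only 600 of the 900 debt is deductible -> 30.0
```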
Abstract:
Master production schedule (MPS) plays an important role in an integrated production planning system. It converts the strategic planning defined in a production plan into tactical operation execution. The MPS is also known as a tool for top management to control manufacturing resources, and it becomes an input to downstream planning levels such as material requirement planning (MRP) and capacity requirement planning (CRP). Hence, an inappropriate decision in MPS development may lead to infeasible execution, which ultimately causes poor delivery performance. One must ensure that the proposed MPS is valid and realistic for implementation before it is released to the real manufacturing system. In practice, where the production environment is stochastic in nature, the development of the MPS is no longer a simple task. Varying processing times and random events such as machine failures are just some of the underlying causes of uncertainty that can hardly be addressed at the planning stage, so that a valid and realistic MPS is tough to realize. The MPS creation problem becomes even more sophisticated as decision makers try to consider multiple objectives: minimizing inventory, maximizing customer satisfaction, and maximizing resource utilization. This study proposes a methodology for MPS creation which is able to deal with those obstacles. The approach takes uncertainty into account and trades off the conflicting objectives at the same time. It incorporates fuzzy multi-objective linear programming (FMOLP) and discrete event simulation (DES) for MPS development.
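A minimal max-min (Zimmermann-style) FMOLP sketch with two fuzzy objectives and one capacity constraint, solvable with scipy; the costs, aspiration levels, and capacity are illustrative assumptions, not the paper's MPS model, and the DES validation step is omitted.

```python
# Max-min (Zimmermann-style) fuzzy multi-objective LP: maximize the smallest
# membership degree lam across fuzzy objectives. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([2.0, 3.0])        # objective 1: inventory cost (minimize)
c2 = np.array([-4.0, -5.0])      # objective 2: -revenue, minimized = revenue maximized
z1_best, z1_worst = 0.0, 60.0    # aspiration / tolerance levels (assumed)
z2_best, z2_worst = -100.0, -40.0

# Variables [x1, x2, lam]; maximize lam  <=>  minimize -lam.
c = np.array([0.0, 0.0, -1.0])
A_ub = np.array([
    np.append(c1, z1_worst - z1_best),  # c1.x + lam*(z1_worst - z1_best) <= z1_worst
    np.append(c2, z2_worst - z2_best),  # same membership constraint for objective 2
    [1.0, 1.0, 0.0],                    # capacity: x1 + x2 <= 25 (assumed)
])
b_ub = np.array([z1_worst, z2_worst, 25.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)], method="highs")
x1, x2, lam = res.x
print(f"plan: x1={x1:.2f}, x2={x2:.2f}, overall satisfaction lam={lam:.3f}")
```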
Abstract:
We consider collective decision problems given by a profile of single-peaked preferences defined over the real line and a set of pure public facilities to be located on the line. In this context, Bochet and Gordon (2012) provide a large class of priority rules based on efficiency, object-population monotonicity and sovereignty. Each such rule is described by a fixed priority ordering among interest groups. We show that any priority rule which treats agents symmetrically (anonymity), respects some form of coherence across collective decision problems (reinforcement), and depends only on peak information (peak-only) is a weighted majoritarian rule. Each such rule defines priorities based on the relative size of the interest groups and specific weights attached to locations. We give an explicit account of the richness of this class of rules.
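As a stylized example of a peak-only rule for a single facility, the sketch below places the facility at a weighted median of the reported peaks; the weights are illustrative, whereas the paper's weighted majoritarian rules attach weights to locations and group sizes more generally.

```python
import numpy as np

def weighted_median(peaks, weights):
    """Smallest peak at which the cumulative weight reaches half the total."""
    order = np.argsort(peaks)
    peaks = np.asarray(peaks, dtype=float)[order]
    weights = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(weights)
    return peaks[np.searchsorted(cum, cum[-1] / 2.0)]

print(weighted_median([0.1, 0.4, 0.7, 0.9], [1.0, 1.0, 2.0, 1.0]))  # -> 0.7
```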
Abstract:
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for discrete classification and continuous regression tasks. Unlike standard reinforcement learning rules, the suggested learning rules also speed up with increasing population size. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
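A hedged sketch of population-based reinforcement learning on a binary decision task, using a generic REINFORCE-style rule for a spike/no-spike code rather than the paper's code-specific plasticity rules; all parameters and the toy task are illustrative.

```python
# REINFORCE-style population rule for a spike/no-spike code on a binary task.
# Parameters and the toy task are illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_inputs, lr = 50, 10, 0.05
W = rng.normal(0, 0.1, (n_neurons, n_inputs))

def trial(x, target):
    global W
    p = 1.0 / (1.0 + np.exp(-W @ x))                    # spike probabilities
    spikes = (rng.random(n_neurons) < p).astype(float)  # stochastic spike/no-spike
    decision = int(spikes.mean() > 0.5)                 # population vote
    reward = 1.0 if decision == target else -1.0
    W += lr * reward * np.outer(spikes - p, x)          # reward-modulated update
    return reward

rewards = []
for _ in range(2000):
    x = rng.normal(size=n_inputs)
    rewards.append(trial(x, int(x.sum() > 0)))          # toy task: sign of the sum
print("mean reward over last 200 trials:", np.mean(rewards[-200:]))
```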
Abstract:
Once more, agriculture threatened to prevent all progress in multilateral trade rule-making at the Ninth WTO Ministerial Conference in December 2013. But this time, the “magic of Bali” worked. After the clock had been stopped mainly because of the food security file, the ministers adopted a comprehensive package of decisions and declarations, mainly in respect of development issues. Five are about agriculture. Decision 38 on Public Stockholding for Food Security Purposes contains a “peace clause” which will now shield certain stockpile programmes from subsidy complaints in formal litigation. This article provides contextual background and analyses this decision from a legal perspective. It finds that, at best, Decision 38 provides a starting point for a WTO Work Programme for food security, for review at the Eleventh Ministerial Conference, which will probably take place in 2017. At worst, it may unduly widen the limited window for government-financed competition existing under present rules in the WTO Agreement on Agriculture – yet without increasing global food security or even guaranteeing that no subsidy claims will be launched, or entertained, under the WTO dispute settlement mechanism. Hence, the Work Programme should seek more coherence between farm support and socio-economic and trade objectives when it comes to stockpiles. This also encompasses a review of the present WTO rules applying to other forms of food reserves and to regional or “virtual” stockpiles. Another “low-hanging fruit” would be a decision to exempt food aid purchases from export restrictions.