972 results for Non-optimal Codon
Abstract:
Peer reviewed
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive, and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private-information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered in which wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system for a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission.
Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxing savings corrects workers' misperceptions and thus their savings and labor decisions. Numerical simulations confirm that these behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
Background: The move toward evidence-based education has led to increasing numbers of randomised trials in schools. However, the literature on recruitment to non-clinical trials is relatively underdeveloped, when compared to that of clinical trials. Recruitment to school-based randomised trials is, however, challenging; even more so when the focus of the study is a sensitive issue such as sexual health. This article reflects on the challenges of recruiting post-primary schools, adolescent pupils and parents to a cluster randomised feasibility trial of a sexual health intervention, and the strategies employed to address them.
Methods: The Jack Trial was funded by the UK National Institute for Health Research (NIHR). It comprised a feasibility study of an interactive film-based sexual health intervention entitled If I Were Jack, recruiting over 800 adolescents from eight socio-demographically diverse post-primary schools in Northern Ireland. It aimed to determine the facilitators and barriers to recruitment and retention to a school-based sexual health trial and identify optimal multi-level strategies for an effectiveness study. As part of an embedded process evaluation, we conducted semi-structured interviews and focus groups with principals, vice-principals, teachers, pupils and parents recruited to the study as well as classroom observations and a parents’ survey.
Results: With reference to Social Learning Theory, we identified a number of individual, behavioural and environmental level factors which influenced recruitment. Commonly identified facilitators included perceptions of the relevance and potential benefit of the intervention to adolescents, the credibility of the organisation and individuals running the study, support offered by trial staff, and financial incentives. Key barriers were prior commitment to other research, lack of time and resources, and perceptions that the intervention was incompatible with pupil or parent needs or the school ethos.
Conclusions: Reflecting on the methodological challenges of recruiting to a school-based sexual health feasibility trial, this study highlights pertinent general and trial-specific facilitators and barriers to recruitment, which will prove useful for future trials with schools, adolescent pupils and parents.
Abstract:
We consider a mechanical problem concerning a 2D axisymmetric body moving forward on the plane and making slow turns of fixed magnitude about its axis of symmetry. The body moves through a medium of non-interacting particles at rest, and collisions of particles with the body's boundary are perfectly elastic (billiard-like). The body has a blunt nose: a line segment orthogonal to the symmetry axis. It is required to make small cavities of special shape on the nose so as to minimize its aerodynamic resistance. This problem of optimizing the shape of the cavities amounts to a special case of the optimal mass transfer problem on the circle, with the transportation cost being the squared Euclidean distance. We find the exact solution of this problem when the amplitude of rotation is smaller than a fixed critical value, and give a numerical solution otherwise. As a by-product, we obtain an explicit description of the solution for a class of optimal transfer problems on the circle.
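The reduction to optimal mass transfer on the circle can be illustrated numerically. The sketch below is not the paper's method; the point masses, weights, and the use of a generic LP solver are illustrative assumptions. It solves a small discrete transport problem on the unit circle with squared Euclidean (chord) distance as the cost, posed as a linear program over the transport plan.

```python
import numpy as np
from scipy.optimize import linprog

def chord_dist(a, b):
    """Euclidean (chord) distance between the points e^{ia} and e^{ib}."""
    return np.abs(np.exp(1j * a) - np.exp(1j * b))

def ot_on_circle(src, dst, mu, nu):
    """Discrete optimal transport on the circle with squared chord cost,
    solved as a linear program over the transport plan entries."""
    n, m = len(src), len(dst)
    C = chord_dist(src[:, None], dst[None, :]) ** 2  # cost matrix
    A_eq, b_eq = [], []
    for i in range(n):                 # row sums equal the source masses
        row = np.zeros((n, m)); row[i, :] = 1
        A_eq.append(row.ravel()); b_eq.append(mu[i])
    for j in range(m):                 # column sums equal the target masses
        col = np.zeros((n, m)); col[:, j] = 1
        A_eq.append(col.ravel()); b_eq.append(nu[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, m)

# Two half-masses rotated by 0.1 rad: each should move along the short arc.
src = np.array([0.0, np.pi])
dst = np.array([0.1, np.pi + 0.1])
cost, plan = ot_on_circle(src, dst, np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

Each half-mass travels the chord of a 0.1 rad arc, so the optimal cost is 2 - 2 cos 0.1, roughly 0.01; sending mass to the opposite target would cost about 4 per unit and is never chosen.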
Abstract:
Purposes. The optimal treatment of N2 non-small cell lung cancer (NSCLC) in older patients is still debated and represents an important therapeutic and ethical problem. Patients and methods. Between January 2000 and December 2010, 273 older patients underwent lung resection for NSCLC. Results. The overall operative mortality was 9.5%. Risk factors for in-hospital mortality were pneumonectomy and polyvasculopathy. One-, 3- and 5-year survival rates were 73%, 23% and 16%, respectively. Conclusions. In potentially operable older patients with NSCLC, every effort must be made to exclude N2 involvement because of the very poor long-term survival. Pneumonectomy in older patients carries prohibitive in-hospital mortality.
Abstract:
Perimeter-baiting of non-crop vegetation using toxic protein baits was developed overseas as a technique for control of melon fly, Zeugodacus (Zeugodacus) cucurbitae (Coquillett) (formerly Bactrocera (Zeugodacus) cucurbitae), and evidence suggests that this technique may also be effective in Australia for control of local fruit fly species in vegetable crops. Using field cage trials and laboratory-reared flies, primary data were generated to support this approach by testing fruit flies' feeding response to protein when applied to eight plant species (forage sorghum, grain sorghum, sweet corn, sugarcane, eggplant, cassava, lilly pilly and orange jessamine) and applied at three heights (1, 1.5 and 2 m). When compared across the plants, Queensland fruit fly, Bactrocera tryoni (Froggatt), most commonly fed on protein bait applied to sugarcane and cassava, whereas more cucumber fly, Zeugodacus (Austrodacus) cucumis (French) (formerly Bactrocera (Austrodacus) cucumis), fed on bait applied to sweet corn and forage sorghum. When protein bait was applied at different heights, B. tryoni responded most to bait placed in the upper part of the plants (2 m), whereas Z. cucumis preferred bait placed lower on the plants (1 and 1.5 m). These results have implications for optimal placement of protein bait for best practice control of fruit flies in vegetable crops and suggest that the two species exhibit different foraging behaviours.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access-latency overhead and potential bandwidth saturation on the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness that exists naturally in connected objects comprising a root object and its reachable set ('rooted sub-graphs'). Second, the dissertation leverages this locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite, the widely used SPECjbb benchmark, the Neo4j graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture, the NUMA-aware garbage collector shows an average performance improvement of 15%.
Furthermore, this performance gain is shown to be the result of improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: it rests on outdated assumptions and generates a constant thread count, yet the production Hotspot JVM still uses it. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring it yields better collection throughput than the default policy. Fifth, the dissertation designs and implements a runtime technique that uses heuristics drawn from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance on the DaCapo benchmarks.
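The idea of deriving a per-cycle thread count from dynamic collection behavior can be sketched as follows. This is a hypothetical heuristic, not the dissertation's technique or Hotspot's actual policy; the function name, the bytes-per-thread and pause-target thresholds, and the live-set/pause inputs are all illustrative assumptions.

```python
def gc_threads_for_cycle(live_bytes, prev_pause_ms, max_threads,
                         bytes_per_thread=256 * 2**20, target_pause_ms=50):
    """Hypothetical heuristic: scale the worker count with the live set to be
    traced, then grow the pool when the previous pause missed its target."""
    # Base count proportional to how much live data must be traced (ceil div).
    threads = max(1, -(-live_bytes // bytes_per_thread))
    # If the last cycle overshot the pause target, double the pool.
    if prev_pause_ms > target_pause_ms:
        threads *= 2
    return int(min(threads, max_threads))
```

A small live set keeps the pool at one thread; a 2 GiB live set asks for eight 256 MiB shares, and a missed pause target doubles that, capped at the hardware limit.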
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, an a priori optimal design will only be possible in certain instances; otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns it. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented, with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be well approximated by an exact design. In situations where this is not satisfactory, algorithms for improving the design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the philosophies underlying statistical inference and to the application of its methods. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance.
Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reducing the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because batches of observations must be taken at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
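As one concrete instance of a numerical method for the continuous design problem (not necessarily the new algorithm of Chapter 3), the classical multiplicative weight update for D-optimality can be sketched; the candidate points, model, and iteration count below are illustrative assumptions.

```python
import numpy as np

def d_optimal_weights(X, iters=500):
    """Multiplicative algorithm for an approximate D-optimal design measure:
    w_i <- w_i * d_i / m, where d_i = x_i^T M(w)^{-1} x_i is the variance
    function at candidate x_i and m is the number of model parameters."""
    n, m = X.shape
    w = np.full(n, 1.0 / n)                   # start from the uniform measure
    for _ in range(iters):
        M = X.T @ (w[:, None] * X)            # information matrix M(w)
        d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)
        w *= d / m                            # upweight high-variance points
        w /= w.sum()                          # renormalize to a measure
    return w

# Simple linear regression f(x) = (1, x) with candidates x = -1, 0, 1.
X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
w = d_optimal_weights(X)
```

For this model the weights converge to the classical D-optimal design: half the mass at each endpoint of [-1, 1] and none at the centre, consistent with the equivalence-theorem check that the variance function never exceeds m at the optimum.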
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate that is to be used in such a computation if it is to continue for an unlimited number of steps. Specifically, the error probability Pe for such a gate must fall below the accuracy threshold: Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10^-4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, for both ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall 1 to 4 orders of magnitude below the target threshold of 10^-4. After applying neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows: Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ϵ ∼ 10^-6 (10^-5) using one-shot local operations.
Step 2 then takes a block of p pairs of physical qubits, each prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state of the C4 quantum error detection code.
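The error probability quoted for the approximate |β01⟩ state can be illustrated with a small numerical check; the slightly mis-rotated preparation below is an illustrative assumption, not the thesis' control sequence.

```python
import numpy as np

# Two-qubit computational basis ordering: |00>, |01>, |10>, |11>.
beta01 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)  # (|01> + |10>)/sqrt(2)

def error_probability(state, target=beta01):
    """Preparation error probability 1 - |<target|state>|^2."""
    overlap = np.vdot(target, state)
    return 1.0 - abs(overlap) ** 2

# A preparation over-rotated by a small angle eps (illustrative model).
eps = 1e-3
noisy = np.array([0.0, np.cos(np.pi / 4 + eps), np.sin(np.pi / 4 + eps), 0.0])
p_err = error_probability(noisy)
```

For this model the overlap is cos(eps), so the error probability is sin^2(eps), about 10^-6 for a milliradian over-rotation, matching the order of magnitude quoted for ideal control.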
Abstract:
Introduction: Colorectal cancer is a pathology with a high public health impact, owing to its prevalence, incidence, severity, cost, and impact on the mental and physical health of the individual and the family. Clinical trials in patients with a history of myocardial infarction who took acetylsalicylic acid (ASA), or calcium with and without vitamin D, showed an association between these drugs and a reduced incidence of colorectal cancer and adenomatous polyps. Objective: To evaluate the literature on the use of ASA and calcium with and without vitamin D with regard to their impact on the prevention of colorectal cancer and adenomatous polyps. Methods: A systematic review was conducted, including clinical trials in patients with risk factors for colorectal cancer and adenomatous polyps who used ASA or calcium with and without vitamin D. Results: 105 articles were selected for the systematic review. Conclusions: Further studies are needed to evaluate the protective effect of aspirin, calcium and vitamin D. In the reviewed articles, aspirin at doses of 81 to 325 mg per day correlates with a reduced risk of colorectal cancer (CRC), although the ideal dose, the starting time and the duration of continuous intake remain unclear. Studies comparing populations taking ASA at different doses are lacking.
Abstract:
In recent years Electric Vehicles (EVs) have gained importance as future transport systems, due to increasing concerns about greenhouse gas emissions and fossil fuel use. Managing the charging and discharging of EVs could provide new business models for participating in the electricity markets. Moreover, vehicle-to-grid systems have the potential to increase utility system flexibility. This thesis develops models for the optimal integration of EVs in the electricity market. In particular, it focuses on the optimal bidding strategy of an EV aggregator participating in both the day-ahead market and the secondary reserve market. The aggregator's profit is maximized taking into account the energy balance equation, as well as the technical constraints on energy settlement, power supply and state of charge of the EVs. The results obtained using the GAMS (General Algebraic Modelling System) environment are presented and discussed.
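A toy version of the aggregator's day-ahead scheduling problem can be sketched as a linear program, here in Python rather than GAMS; the prices, fleet power limit, and the single energy-balance constraint are illustrative assumptions, not the thesis' market model (which also covers reserve bids and per-vehicle state of charge).

```python
import numpy as np
from scipy.optimize import linprog

# Toy day-ahead problem: choose an hourly charging schedule that delivers a
# fixed amount of energy to the fleet at minimum purchase cost.
prices = np.array([30.0, 20.0, 10.0, 40.0])  # EUR/MWh per hour (assumed)
p_max = 2.0          # MW aggregate charging limit (assumed)
energy_target = 5.0  # MWh required in the batteries by end of horizon

# Decision: hourly charge power p_t >= 0 over 1-hour steps.
# Minimize sum(prices * p_t) s.t. sum(p_t) == energy_target, 0 <= p_t <= p_max.
res = linprog(prices, A_eq=np.ones((1, len(prices))), b_eq=[energy_target],
              bounds=[(0.0, p_max)] * len(prices), method="highs")
schedule, cost = res.x, res.fun
```

The solver fills the cheapest hours first: 2 MWh at 10 EUR, 2 MWh at 20 EUR and the remaining 1 MWh at 30 EUR, for a total cost of 90 EUR. The full model would add reserve-market revenue and per-EV constraints on top of this skeleton.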
Abstract:
The present work proposes different approaches to extend the mathematical methods of supervisory energy management used in terrestrial environments to the maritime sector, which differs in constraints, variables and disturbances. The aim is to find the optimal real-time solution that includes the minimization of a defined track time while maintaining the classical energetic approach. Starting from analyzing and modelling the powertrain and boat dynamics, the energy economy problem is formulated following the mathematical principles of optimal control theory. Then, an adaptation aimed at finding a winning strategy for the Monaco Energy Boat Challenge endurance trial is performed via the ECMS and A-ECMS control strategies, which lead to a more accurate knowledge of the energy sources and the boat's behaviour. The simulations show that the algorithm meets the fuel economy and time optimization targets, but the latter adds considerable tuning and computational complexity. To assess a practical implementation on real hardware, the knowledge gained from the previous approaches has been translated into a rule-based algorithm that can run on an embedded CPU. Finally, the algorithm has been tuned and tested in a real-world race scenario, showing promising results.
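The core ECMS step, choosing at each instant the power split that minimizes fuel power plus an equivalence-factor-weighted battery power, can be sketched as follows. The constant engine efficiency, the candidate grid, and the function names are illustrative assumptions, not the boat's identified powertrain model.

```python
import numpy as np

def ecms_split(p_demand, s, p_batt_candidates, engine_efficiency=0.3):
    """One ECMS step: pick the battery power minimizing the equivalent
    consumption H = P_fuel + s * P_batt, with the engine covering the rest.
    Toy constant-efficiency fuel model; all numbers are illustrative."""
    best = None
    for p_batt in p_batt_candidates:
        p_engine = p_demand - p_batt
        if p_engine < 0:                       # no engine braking in this sketch
            continue
        p_fuel = p_engine / engine_efficiency  # chemical power drawn from fuel
        h = p_fuel + s * p_batt                # equivalence-weighted cost
        if best is None or h < best[0]:
            best = (h, p_batt)
    return best[1]

candidates = np.linspace(0.0, 10.0, 11)  # candidate battery discharge, kW
```

With a low equivalence factor s the battery looks cheap and covers the whole demand; with a high s the engine takes over, which is exactly the trade-off the A-ECMS adapts online by tuning s.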
Abstract:
In this project an optimal pose selection method for the calibration of an overconstrained cable-driven parallel robot is presented. This manipulator belongs to a subcategory of parallel robots in which the classic rigid "legs" are replaced by cables. Cables are flexible elements that bring both advantages and disadvantages to robot modeling. For this reason there are many open research issues, and the calibration of geometric parameters is one of them. The identification of the geometry of a robot, in particular, is usually called kinematic calibration. Many methods have been proposed in past years for the solution of this problem. Although these methods are based on calibration using different kinematic models, their robustness and reliability decrease as the robot's geometry becomes more complex. This makes the selection of calibration poses more complicated: the position and orientation of the end-effector in the workspace become important selection criteria. Thus, in general, it is necessary to evaluate the robustness of the chosen calibration method, for example by means of a parameter such as the observability index. It is known from theory that maximizing this index identifies the best choice of calibration poses, and consequently using this pose set may improve the calibration process. The objective of this thesis is to analyze optimization algorithms which aim to calculate an optimal choice of poses in both quantitative and qualitative terms. Quantitatively, because it is of fundamental importance to understand how many poses are needed; a greater number of poses does not necessarily lead to a better result. Qualitatively, because it is useful to understand whether the selected combination of poses actually gives additional information in the process of identifying the parameters.
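The observability index comes in several variants; a common one is the normalized geometric mean of the singular values of the stacked identification Jacobian. The sketch below (the function names, the greedy comparison, and the toy Jacobians are illustrative assumptions, not the thesis' algorithm) computes this index and uses it to rank candidate pose sets.

```python
import numpy as np

def observability_index(jacobian):
    """O1-style observability index: geometric mean of the singular values
    of the identification Jacobian, normalized by the number of pose rows.
    Larger values indicate a better-conditioned calibration pose set."""
    n_poses = jacobian.shape[0]
    sigma = np.linalg.svd(jacobian, compute_uv=False)
    return sigma.prod() ** (1.0 / len(sigma)) / np.sqrt(n_poses)

def pick_best_pose_set(jacobians):
    """Return the index of the candidate pose set whose Jacobian maximizes
    the observability index (one greedy comparison step)."""
    scores = [observability_index(J) for J in jacobians]
    return int(np.argmax(scores))

# A well-conditioned pose set should beat a nearly degenerate one whose
# rows all excite (almost) the same parameter direction.
good = np.eye(4)
bad = np.array([[1.0, 0.0, 0.0, 0.0]] * 4) + 1e-6 * np.eye(4)
```

Because the index uses the product of singular values, a pose set that leaves even one parameter direction nearly unexcited scores close to zero, which is why maximizing it rejects redundant pose combinations regardless of how many poses they contain.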